[jira] [Commented] (HDFS-10360) DataNode may format directory and lose blocks if current/VERSION is missing

2016-05-16 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284888#comment-15284888
 ] 

Lei (Eddy) Xu commented on HDFS-10360:
--

Hi, [~jojochuang]. Thanks a lot for this finding. It exposes a risk of data 
loss. The code LGTM.

Would you mind adding a functional test that starts a {{MiniDFSCluster}} with 
such a "corrupted" data dir on one DN, and
* Sets {{volFailuresTolerated}}
* Makes sure that the failed volumes can be detected from both the DN JMX and 
the NN JMX (a rough sketch follows).
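As a hedged sketch only, such a test might look like the following; it assumes 
the usual {{MiniDFSCluster}} test utilities, and the JMX assertion helper is 
hypothetical:

{code}
// Hedged sketch: assertFailedVolumesReported() is a hypothetical helper that
// would read the failed-volume counts from the DN and NN JMX beans.
Configuration conf = new HdfsConfiguration();
conf.setInt(DFSConfigKeys.DFS_DATANODE_FAILED_VOLUMES_TOLERATED_KEY, 1);
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1).storagesPerDatanode(2).build();
try {
  cluster.waitActive();
  // Simulate the corruption: remove current/VERSION from one storage dir.
  File version =
      new File(cluster.getInstanceStorageDir(0, 0), "current/VERSION");
  assertTrue(version.delete());
  cluster.restartDataNode(0, true);
  cluster.waitActive();
  // The volume should be reported as failed, not silently reformatted.
  assertFailedVolumesReported(cluster, 1);  // hypothetical JMX check
} finally {
  cluster.shutdown();
}
{code}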


> DataNode may format directory and lose blocks if current/VERSION is missing
> ---
>
> Key: HDFS-10360
> URL: https://issues.apache.org/jira/browse/HDFS-10360
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10360.001.patch, HDFS-10360.002.patch, 
> HDFS-10360.003.patch, HDFS-10360.004.patch, HDFS-10360.004.patch, 
> HDFS-10360.005.patch
>
>
> Under certain circumstances, if the current/VERSION of a storage directory is 
> missing, the DataNode may format the storage directory even though _block files 
> are not missing_.
> This is very easy to reproduce. Simply launch an HDFS cluster and create some 
> files. Delete current/VERSION, and restart the DataNode.
> After the restart, the DataNode will format the directory and remove all 
> existing block files:
> {noformat}
> 2016-05-03 12:57:15,387 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on /data/dfs/dn/in_use.lock acquired by nodename 
> 5...@weichiu-dn-2.vpc.cloudera.com
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Storage directory /data/dfs/dn is not formatted for 
> BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Analyzing storage directories for bpid BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Locking is disabled for 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Block pool storage directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642 is not formatted 
> for BP-787466439-172
> .26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-787466439-172.26.24.43-1462305406642 directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642/current
> {noformat}
> The bug is: the DataNode assumes that if none of {{current/VERSION}}, 
> {{previous/}}, {{previous.tmp/}}, {{removed.tmp/}}, {{finalized.tmp/}} and 
> {{lastcheckpoint.tmp/}} exist, the storage directory contains nothing 
> important to HDFS and decides to format it. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java#L526-L545
> However, block files may still exist, and in my opinion, we should do 
> everything possible to retain the block files.
> I have two suggestions:
> # Check whether the {{current/}} directory is empty. If it is not, throw an 
> {{InconsistentFSStateException}} in {{Storage#analyzeStorage}} instead of 
> assuming the directory is not formatted. Or,
> # In {{Storage#clearDirectory}}, before it formats the storage directory, 
> rename or move the {{current/}} directory. Also, log whatever is being 
> renamed/moved.
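A hedged sketch of suggestion #1; the field and helper names approximate 
{{Storage}}/{{StorageDirectory}} on trunk, and this is not the committed patch:

{code}
// Sketch of suggestion #1: if current/ exists and is non-empty while VERSION
// is missing, refuse to treat the directory as unformatted rather than let
// the caller format it and destroy the block files.
File currentDir = sd.getCurrentDir();        // <storage>/current
String[] contents = currentDir.list();
if (contents != null && contents.length > 0) {
  throw new InconsistentFSStateException(sd.getRoot(),
      "VERSION file is missing but " + currentDir + " is not empty;"
      + " refusing to format");
}
return StorageState.NOT_FORMATTED;
{code}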






[jira] [Updated] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10410:
--
Attachment: HADOOP-10410.001.patch

Patch 001:
* Set {{RedundantEditLogInputStream#LOG}} to its own class

> RedundantEditLogInputStream#LOG is set to wrong class
> -
>
> Key: HDFS-10410
> URL: https://issues.apache.org/jira/browse/HDFS-10410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HADOOP-10410.001.patch
>
>
> Found the issue while analyzing a log message that points to the wrong class.
> {code}
> class RedundantEditLogInputStream extends EditLogInputStream {
>   public static final Log LOG = 
> LogFactory.getLog(EditLogInputStream.class.getName());
> {code}
> should be changed to:
> {code}
>   public static final Log LOG = 
> LogFactory.getLog(RedundantEditLogInputStream.class.getName());
> {code}






[jira] [Work started] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-10410 started by John Zhuge.
-
> RedundantEditLogInputStream#LOG is set to wrong class
> -
>
> Key: HDFS-10410
> URL: https://issues.apache.org/jira/browse/HDFS-10410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>
> Found the issue while analyzing a log message that points to the wrong class.
> {code}
> class RedundantEditLogInputStream extends EditLogInputStream {
>   public static final Log LOG = 
> LogFactory.getLog(EditLogInputStream.class.getName());
> {code}
> should be changed to:
> {code}
>   public static final Log LOG = 
> LogFactory.getLog(RedundantEditLogInputStream.class.getName());
> {code}






[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284813#comment-15284813
 ] 

Chris Nauroth commented on HDFS-9732:
-

+1 from me too.  [~yzhangal], thank you.

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.
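A hedged sketch of the direction under review; the method names come from 
{{AbstractDelegationTokenIdentifier}}, but this is illustrative, not 
necessarily the committed patch:

{code}
// Illustrative sketch: delegate to the superclass so the richer diagnostics
// (issue/expiry times, renewer, sequence number) survive in log output.
@Override
public String toString() {
  return getKind() + " token " + getSequenceNumber()
      + " for " + getUser().getShortUserName()
      + "; " + super.toString();
}
{code}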






[jira] [Created] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread John Zhuge (JIRA)
John Zhuge created HDFS-10410:
-

 Summary: RedundantEditLogInputStream#LOG is set to wrong class
 Key: HDFS-10410
 URL: https://issues.apache.org/jira/browse/HDFS-10410
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Minor


Found the issue while analyzing a log message that points to the wrong class.

{code}
class RedundantEditLogInputStream extends EditLogInputStream {
  public static final Log LOG = 
LogFactory.getLog(EditLogInputStream.class.getName());
{code}
should be changed to:
{code}
  public static final Log LOG = 
LogFactory.getLog(RedundantEditLogInputStream.class.getName());
{code}







[jira] [Commented] (HDFS-10360) DataNode may format directory and lose blocks if current/VERSION is missing

2016-05-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284827#comment-15284827
 ] 

Wei-Chiu Chuang commented on HDFS-10360:


The test failures are unrelated, and the tests passed in my tree.

> DataNode may format directory and lose blocks if current/VERSION is missing
> ---
>
> Key: HDFS-10360
> URL: https://issues.apache.org/jira/browse/HDFS-10360
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10360.001.patch, HDFS-10360.002.patch, 
> HDFS-10360.003.patch, HDFS-10360.004.patch, HDFS-10360.004.patch, 
> HDFS-10360.005.patch
>
>
> Under certain circumstances, if the current/VERSION of a storage directory is 
> missing, the DataNode may format the storage directory even though _block files 
> are not missing_.
> This is very easy to reproduce. Simply launch an HDFS cluster and create some 
> files. Delete current/VERSION, and restart the DataNode.
> After the restart, the DataNode will format the directory and remove all 
> existing block files:
> {noformat}
> 2016-05-03 12:57:15,387 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on /data/dfs/dn/in_use.lock acquired by nodename 
> 5...@weichiu-dn-2.vpc.cloudera.com
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Storage directory /data/dfs/dn is not formatted for 
> BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Analyzing storage directories for bpid BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Locking is disabled for 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Block pool storage directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642 is not formatted 
> for BP-787466439-172
> .26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-787466439-172.26.24.43-1462305406642 directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642/current
> {noformat}
> The bug is: the DataNode assumes that if none of {{current/VERSION}}, 
> {{previous/}}, {{previous.tmp/}}, {{removed.tmp/}}, {{finalized.tmp/}} and 
> {{lastcheckpoint.tmp/}} exist, the storage directory contains nothing 
> important to HDFS and decides to format it. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java#L526-L545
> However, block files may still exist, and in my opinion, we should do 
> everything possible to retain the block files.
> I have two suggestions:
> # Check whether the {{current/}} directory is empty. If it is not, throw an 
> {{InconsistentFSStateException}} in {{Storage#analyzeStorage}} instead of 
> assuming the directory is not formatted. Or,
> # In {{Storage#clearDirectory}}, before it formats the storage directory, 
> rename or move the {{current/}} directory. Also, log whatever is being 
> renamed/moved.






[jira] [Updated] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10410:
--
   Labels: supportability  (was: )
Affects Version/s: 2.6.0
 Target Version/s: 2.8.0
   Status: Patch Available  (was: In Progress)

> RedundantEditLogInputStream#LOG is set to wrong class
> -
>
> Key: HDFS-10410
> URL: https://issues.apache.org/jira/browse/HDFS-10410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HADOOP-10410.001.patch
>
>
> Found the issue while analyzing a log message that points to the wrong class.
> {code}
> class RedundantEditLogInputStream extends EditLogInputStream {
>   public static final Log LOG = 
> LogFactory.getLog(EditLogInputStream.class.getName());
> {code}
> should be changed to:
> {code}
>   public static final Log LOG = 
> LogFactory.getLog(RedundantEditLogInputStream.class.getName());
> {code}






[jira] [Commented] (HDFS-10276) HDFS throws AccessControlException when checking for the existence of /a/b when /a is a file

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284500#comment-15284500
 ] 

Hadoop QA commented on HDFS-10276:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 7s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 39s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 187m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
|   | hadoop.hdfs.TestAsyncDFSRename |
| JDK v1.8.0_91 Timed out junit tests | 
org.apache.hadoop.hdfs.TestDecommissionWithStriped |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys |
|   | hadoop.hdfs.TestAsyncDFSRename |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804150/HDFS-10276.005.patch |
| JIRA Issue | 

[jira] [Updated] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10400:
--
Assignee: Yiqun Lin

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, the command terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code different 
> from zero.
> The documentation at 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> Following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]
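A hedged regression-test sketch for the expected behavior: {{FsShell}} 
implements {{Tool}}, so {{ToolRunner}} returns the shell's exit code directly; 
the paths below are illustrative, and this is not the actual patch.

{code}
// Hedged sketch: a put that fails to write its data should surface a
// non-zero exit code through ToolRunner.
FsShell shell = new FsShell(conf);
int exitCode = ToolRunner.run(shell,
    new String[] { "-put", "/tmp/too-big.bin", "/user/test/" });
assertTrue("put over a full filesystem should exit non-zero", exitCode != 0);
{code}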






[jira] [Commented] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284679#comment-15284679
 ] 

Hadoop QA commented on HDFS-10404:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 38s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 193m 38s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.TestAsyncDFSRename |
| JDK v1.8.0_91 Timed out junit tests | 
org.apache.hadoop.hdfs.TestLeaseRecovery2 |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | hadoop.hdfs.TestAsyncDFSRename |
\\
\\
|| 

[jira] [Commented] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284499#comment-15284499
 ] 

Hadoop QA commented on HDFS-2173:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 12s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 146m 27s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.shortcircuit.TestShortCircuitCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804155/HDFS-2173.04.patch |
| JIRA Issue | HDFS-2173 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e2fb061ef024 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk 

[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-16 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284723#comment-15284723
 ] 

James Clampffer commented on HDFS-10188:


The macro idea looks good to me.  If you take this approach, you can actually 
determine the size at compile time, at least for operator new/delete (not the 
array forms, new[]/delete[]).

{code}
static void operator delete(void* p) { \
  /* Assumes the matching operator new placed a mem_struct header, */ \
  /* holding the allocation size, immediately before the object.   */ \
  mem_struct* header = (mem_struct*)p; \
  size_t size = (--header)->mem_size; \
  ::memset(p, 0, size); /* scribble freed memory to expose use-after-free */ \
  ::free(header); \
} \
{code}

Since it's just text manipulation, decltype(this) is a valid expression assuming 
the macro is expanded in a struct or class.  We can use that for the size.
{code}
static void operator delete(void* p) { \
  ::memset(p, 0, sizeof( decltype(this) )); \
  ::free(p); \
} \
{code}
It's slightly less expensive, and it avoids the pointer arithmetic that could 
lead to endianness issues.  I'm not sure if it's possible to do something 
analogous for new[]/delete[].  If there are other reasons to keep the header 
tag around, I'm fine with that too.

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double deletes, read-after-delete, and write-after-delete, 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free bugs.  The 
> continuation pattern makes these really tricky to debug, because by the time a 
> SIGSEGV is raised, the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the following, in 
> order of increasing runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory given to the constructor is dirty, and memset freed 
> memory to 0
> 3: implement operator new with mmap, and lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever, because the memory 
> is never unmapped
> This should also lay some groundwork for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.






[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284641#comment-15284641
 ] 

Kihwal Lee commented on HDFS-10400:
---

[~linyiqun], thanks for working on this. I assigned it to you.

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, the command terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code different 
> from zero.
> The documentation at 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> Following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]






[jira] [Commented] (HDFS-10360) DataNode may format directory and lose blocks if current/VERSION is missing

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284691#comment-15284691
 ] 

Hadoop QA commented on HDFS-10360:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 115 unchanged - 3 fixed = 115 total (was 118) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 32s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 16s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 9s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804166/HDFS-10360.005.patch |
| JIRA Issue | HDFS-10360 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9661d7a200f0 

[jira] [Commented] (HDFS-10303) DataStreamer#ResponseProcessor calculate packet acknowledge duration wrongly.

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284671#comment-15284671
 ] 

Hadoop QA commented on HDFS-10303:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-hdfs-project: patch generated 5 new + 77 unchanged 
- 1 fixed = 82 total (was 78) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 54s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 41s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 

[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284487#comment-15284487
 ] 

Steve Loughran commented on HDFS-9732:
--

I can commit this to trunk; have you tested it in branch-2 yet? 

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.






[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15284485#comment-15284485
 ] 

Steve Loughran commented on HDFS-9732:
--

+1 from me then

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.






[jira] [Updated] (HDFS-10408) Add tests for out-of-order asynchronous rename/setPermission/setOwner

2016-05-16 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10408:
-
Status: Patch Available  (was: Open)

> Add tests for out-of-order asynchronous rename/setPermission/setOwner
> -
>
> Key: HDFS-10408
> URL: https://issues.apache.org/jira/browse/HDFS-10408
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
> Attachments: HDFS-10408-HDFS-9924.000.patch
>
>
> HDFS-10224 and HDFS-10346 mostly test the batch-style async request/response. 
> The out-of-order case should also be tested.
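A hedged sketch of the out-of-order idea. It assumes the 
{{AsyncDistributedFileSystem#rename}} API from the HDFS-9924 branch (returning 
{{Future<Void>}}); the variables here are illustrative:

{code}
// Hedged sketch: issue several async renames, then retrieve the futures in
// reverse order to simulate out-of-order consumption of the responses.
List<Future<Void>> futures = new ArrayList<>();
for (int i = 0; i < n; i++) {
  futures.add(adfs.rename(src[i], dst[i], Options.Rename.NONE));
}
for (int i = n - 1; i >= 0; i--) {
  futures.get(i).get();   // retrieval order differs from submission order
}
{code}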






[jira] [Updated] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10383:
-
Attachment: HDFS-10383.003.patch

Thank you [~walter.k.su] for your helpful suggestion.

Although it's not strictly related to the switch to the try-with-resources 
statement, I think it's still a case of closing resources safely (as this 
jira's title indicates), so it's doable in this jira. I'll update the 
description of this jira slightly, in case someone who skips the discussion in 
the comment section blames us. :)

I like the idea of calling the create RPC directly instead of wrapping an 
output stream after creating the file implicitly. In this test helper method, 
we never actually write to the stream, so wrapping one is a burden rather than 
a necessity for protecting the block data's integrity.

Attached v3 patch.

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close their resources 
> safely, or elegantly. We can use the try-with-resources statement to address 
> this problem.
> Especially, as {{DFSTestUtil}} is widely used in tests, we need to preserve 
> any exception thrown during the processing of the resource while still 
> guaranteeing it is closed finally. Take, for example, the current implementation 
> of {{DFSTestUtil#createFile()}}: it closes the FSDataOutputStream in the 
> {{finally}} block, and if, when closing, the internal 
> {{DFSOutputStream#close()}} throws an exception (which it often does), the 
> exception thrown during the processing will be lost. See this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  and we have to guess what the root cause was.
> Using try-with-resources, we can close the resources safely, and the 
> exceptions thrown both in processing and in closing will be available (the 
> closing exception will be suppressed).






[jira] [Updated] (HDFS-10408) Add tests for out-of-order asynchronous rename/setPermission/setOwner

2016-05-16 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10408:
-
Attachment: HDFS-10408-HDFS-9924.000.patch

The v000 patch adds some tests that simulate out-of-order retrieval of responses.

> Add tests for out-of-order asynchronous rename/setPermission/setOwner
> -
>
> Key: HDFS-10408
> URL: https://issues.apache.org/jira/browse/HDFS-10408
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
> Attachments: HDFS-10408-HDFS-9924.000.patch
>
>
> HDFS-10224 and HDFS-10346 mostly test the batch-style async request/response. 
> The out-of-order case should also be tested.






[jira] [Updated] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-16 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10383:
--
Description: 
There are a few methods in {{DFSTestUtil}} that do not close their resources 
safely, or elegantly. We can use the try-with-resources statement to address 
this problem.

Especially, as {{DFSTestUtil}} is widely used in tests, we need to preserve any 
exception thrown during the processing of the resource while still guaranteeing 
it is closed finally. Take, for example, the current implementation of 
{{DFSTestUtil#createFile()}}: it closes the FSDataOutputStream in the 
{{finally}} block, and if, when closing, the internal {{DFSOutputStream#close()}} 
throws an exception (which it often does), the exception thrown during the 
processing will be lost. See this [test 
failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
 and we have to guess what the root cause was.

Using try-with-resources, we can close the resources safely, and the exceptions 
thrown both in processing and in closing will be available (the closing 
exception will be suppressed). Besides using try-with-resources, if a stream is 
not necessary, don't create or close it at all.


  was:
There are a few methods in {{DFSTestUtil}} that do not close their resources 
safely, or elegantly. We can use the try-with-resources statement to address 
this problem.

Especially, as {{DFSTestUtil}} is widely used in tests, we need to preserve any 
exception thrown during the processing of the resource while still guaranteeing 
it is closed finally. Take, for example, the current implementation of 
{{DFSTestUtil#createFile()}}: it closes the FSDataOutputStream in the 
{{finally}} block, and if, when closing, the internal {{DFSOutputStream#close()}} 
throws an exception (which it often does), the exception thrown during the 
processing will be lost. See this [test 
failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
 and we have to guess what the root cause was.

Using try-with-resources, we can close the resources safely, and the exceptions 
thrown both in processing and in closing will be available (the closing 
exception will be suppressed).



> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close resources 
> safely or elegantly. We can use the try-with-resources statement to address 
> this problem.
> Especially, as {{DFSTestUtil}} is widely used in tests, we need to preserve 
> any exception thrown during the processing of the resource while still 
> guaranteeing it is eventually closed. Take, for example, the current 
> implementation of {{DFSTestUtil#createFile()}}: it closes the 
> FSDataOutputStream in the {{finally}} block, and if the internal 
> {{DFSOutputStream#close()}} throws an exception during closing, which it 
> often does, the exception thrown during the processing will be lost. See 
> this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  where we have to guess at the root cause.
> Using try-with-resources, we can close the resources safely, and the 
> exceptions thrown both in processing and in closing will be available (the 
> closing exception will be suppressed). Besides the try-with-resources, if a 
> stream is not necessary, don't create/close it.
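
For illustration, here is a minimal, self-contained sketch (not the actual 
{{DFSTestUtil}} code) of how try-with-resources keeps both exceptions visible:
{code}
import java.io.Closeable;
import java.io.IOException;

public class SuppressedCloseDemo {
  // Stand-in for a stream whose close() often throws, like DFSOutputStream.
  static class FlakyStream implements Closeable {
    void write() throws IOException {
      throw new IOException("error during processing");
    }
    @Override
    public void close() throws IOException {
      throw new IOException("error during close");
    }
  }

  public static void main(String[] args) {
    try (FlakyStream out = new FlakyStream()) {
      out.write();
    } catch (IOException e) {
      // The processing exception is the primary one; the close() exception
      // is preserved as a suppressed exception rather than replacing it.
      System.out.println("primary: " + e.getMessage());
      for (Throwable s : e.getSuppressed()) {
        System.out.println("suppressed: " + s.getMessage());
      }
    }
  }
}
{code}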






[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285051#comment-15285051
 ] 

Arpit Agarwal commented on HDFS-10383:
--

[~liuml07] I added you as a contributor. Please try again and let me know if it 
doesn't work.

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close resources 
> safely or elegantly. We can use the try-with-resources statement to address 
> this problem.
> Especially, as {{DFSTestUtil}} is widely used in tests, we need to preserve 
> any exception thrown during the processing of the resource while still 
> guaranteeing it is eventually closed. Take, for example, the current 
> implementation of {{DFSTestUtil#createFile()}}: it closes the 
> FSDataOutputStream in the {{finally}} block, and if the internal 
> {{DFSOutputStream#close()}} throws an exception during closing, which it 
> often does, the exception thrown during the processing will be lost. See 
> this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  where we have to guess at the root cause.
> Using try-with-resources, we can close the resources safely, and the 
> exceptions thrown both in processing and in closing will be available (the 
> closing exception will be suppressed).






[jira] [Commented] (HDFS-10408) Add tests for out-of-order asynchronous rename/setPermission/setOwner

2016-05-16 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285028#comment-15285028
 ] 

Xiaobing Zhou commented on HDFS-10408:
--

1. Especially see testConservativeOutOfOrderResponse and 
testAggressiveOutOfOrderResponse.
2. testConservativeConcurrentAsyncAPI and testAggressiveConcurrentAsyncAPI are 
renamed to testConservativeBatchAsyncAPI and testAggressiveBatchAsyncAPI, 
respectively.

> Add tests for out-of-order asynchronous rename/setPermission/setOwner
> -
>
> Key: HDFS-10408
> URL: https://issues.apache.org/jira/browse/HDFS-10408
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaobing Zhou
> Attachments: HDFS-10408-HDFS-9924.000.patch
>
>
> HDFS-10224 and HDFS-10346 mostly test the batch-style async request/response. 
> The out-of-order case should also be tested.






[jira] [Commented] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285128#comment-15285128
 ] 

Hadoop QA commented on HDFS-10410:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 13 unchanged - 1 fixed = 13 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 33s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 14s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 142m 34s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.hdfs.TestRenameWhileOpen |
| JDK v1.7.0_101 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (HDFS-10257) Quick Thread Local Storage set-up has a small flaw

2016-05-16 Thread Stephen Bovy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285003#comment-15285003
 ] 

Stephen Bovy commented on HDFS-10257:
-

 

So I tried using {{cmake .}}

And I got: "You must set the CMake variable GENERATED_JAVAH".

So how do I use cmake on a stand-alone basis (without maven) to build 
libhdfs?

I guess I need a shell script for set-up purposes? What do I need to set?

It should not be that difficult.

[root@sandbox src]# cmake .
-- The C compiler identification is GNU 4.4.7
-- The CXX compiler identification is GNU 4.4.7
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
JAVA_HOME=, JAVA_JVM_LIBRARY=/usr/lib/jvm/java/jre/lib/amd64/server/libjvm.so
JAVA_INCLUDE_PATH=/usr/lib/jvm/java/include, 
JAVA_INCLUDE_PATH2=/usr/lib/jvm/java/include/linux
Located all JNI components successfully.
-- Performing Test HAVE_BETTER_TLS
-- Performing Test HAVE_BETTER_TLS - Success
-- Performing Test HAVE_INTEL_SSE_INTRINSICS
-- Performing Test HAVE_INTEL_SSE_INTRINSICS - Success
-- Looking for dlopen in dl
-- Looking for dlopen in dl - found
-- Found JNI: /usr/lib/jvm/java/jre/lib/amd64/libjawt.so

CMake Error at CMakeLists.txt:84 (MESSAGE):
  You must set the CMake variable GENERATED_JAVAH

-- Configuring incomplete, errors occurred!
See also 
"/root/hadoop-2.7.2-src/hadoop-hdfs-project/hadoop-hdfs/src/CMakeFiles/CMakeOutput.log".
[root@sandbox src]#

 



> Quick Thread Local Storage set-up has a small flaw
> --
>
> Key: HDFS-10257
> URL: https://issues.apache.org/jira/browse/HDFS-10257
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.4
> Environment: Linux 
>Reporter: Stephen Bovy
>Priority: Minor
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In {{jni_helper.c}}, in the {{getJNIEnv}} function, the
> {{THREAD_LOCAL_STORAGE_SET_QUICK(env);}} macro is in the wrong location.
> It should precede {{threadLocalStorageSet(env)}}, as follows:
> THREAD_LOCAL_STORAGE_SET_QUICK(env);
> if (threadLocalStorageSet(env)) {
>   return NULL;
> }
> AND in {{thread_local_storage.h}} the macro
> {{THREAD_LOCAL_STORAGE_SET_QUICK}}
> should be as follows:
> #ifdef HAVE_BETTER_TLS
>   #define THREAD_LOCAL_STORAGE_GET_QUICK() \
> static __thread JNIEnv *quickTlsEnv = NULL; \
> { \
>   if (quickTlsEnv) { \
> return quickTlsEnv; \
>   } \
> }
>   #define THREAD_LOCAL_STORAGE_SET_QUICK(env) \
> { \
>   quickTlsEnv = (env); \
>   return env; \
> }
> #else
>   #define THREAD_LOCAL_STORAGE_GET_QUICK()
>   #define THREAD_LOCAL_STORAGE_SET_QUICK(env)
> #endif






[jira] [Updated] (HDFS-10403) DiskBalancer: Add cancel command

2016-05-16 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10403:

Attachment: HDFS-10403-HDFS-1312.001.patch

Posting here for early code review; this depends on the QueryCommand patch, 
HDFS-10402.


> DiskBalancer: Add cancel  command
> -
>
> Key: HDFS-10403
> URL: https://issues.apache.org/jira/browse/HDFS-10403
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: balancer & mover
>Affects Versions: HDFS-1312
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: HDFS-1312
>
> Attachments: HDFS-10403-HDFS-1312.001.patch
>
>
> Allows user to cancel an on-going disk balancing operation






[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-16 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284924#comment-15284924
 ] 

Yongjun Zhang commented on HDFS-9732:
-

Thank you all [~steve_l], [~cnauroth] and [~aw]. Really appreciate it!

Thanks for offering to commit Steve, let me try out other branches, and I may 
just go ahead commit if I don't see issue to save some iteration. I will have 
to do it later today though.



> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostics info, 
> owner, sequence number. But its superclass,  
> {{AbstractDelegationTokenIdentifier}} contains a lot more information, 
> including token issue and expiry times.
> Because  {{DelegationTokenIdentifier.toString()}} doesn't include this data,
> information that is potentially useful for kerberos diagnostics is lost.






[jira] [Created] (HDFS-10411) libhdfs++: Incorrect parse of URIs from hdfs-site.xml

2016-05-16 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10411:
--

 Summary: libhdfs++:  Incorrect parse of URIs from hdfs-site.xml
 Key: HDFS-10411
 URL: https://issues.apache.org/jira/browse/HDFS-10411
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer


It looks like the URI class confuses the host and scheme if the original URI 
didn't have a scheme.

Example from hdfs-site.xml (config generated using the Cloudera MC):
{code}
<property>
  <name>dfs.namenode.servicerpc-address.nameservice1.namenode86</name>
  <value>this-is-node-01.duder.com:8022</value>
</property>
{code}

host = empty string
port = unset optional
scheme = this-is-node-01.duder.com
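
For comparison, {{java.net.URI}} shows the same host/scheme confusion for a 
bare "host:port" string (illustrative only; the libhdfs++ uri class is a 
separate implementation):
{code}
import java.net.URI;

public class UriSchemeConfusion {
  public static void main(String[] args) {
    // Without "//", "host:8022" is a valid opaque URI whose scheme is the
    // host name, which matches the mis-parse described above.
    URI bare = URI.create("this-is-node-01.duder.com:8022");
    System.out.println(bare.getScheme()); // this-is-node-01.duder.com
    System.out.println(bare.getHost());   // null

    // A leading "//" makes the parser treat the string as an authority.
    URI withAuthority = URI.create("//this-is-node-01.duder.com:8022");
    System.out.println(withAuthority.getScheme()); // null
    System.out.println(withAuthority.getHost());   // this-is-node-01.duder.com
    System.out.println(withAuthority.getPort());   // 8022
  }
}
{code}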








[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285043#comment-15285043
 ] 

Mingliang Liu commented on HDFS-10383:
--

Because of the recent JIRA update, I lost the permission to edit the 
description. I'd like to add one line to it, "Besides the try-with-resource, if 
a stream is not necessary, don't create/close it."

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close resources 
> safely or elegantly. We can use the try-with-resources statement to address 
> this problem.
> Especially, as {{DFSTestUtil}} is widely used in tests, we need to preserve 
> any exception thrown during the processing of the resource while still 
> guaranteeing it is eventually closed. Take, for example, the current 
> implementation of {{DFSTestUtil#createFile()}}: it closes the 
> FSDataOutputStream in the {{finally}} block, and if the internal 
> {{DFSOutputStream#close()}} throws an exception during closing, which it 
> often does, the exception thrown during the processing will be lost. See 
> this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  where we have to guess at the root cause.
> Using try-with-resources, we can close the resources safely, and the 
> exceptions thrown both in processing and in closing will be available (the 
> closing exception will be suppressed).






[jira] [Commented] (HDFS-10382) In WebHDFS numeric usernames do not work with DataNode

2016-05-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285184#comment-15285184
 ] 

Allen Wittenauer commented on HDFS-10382:
-

I'm very hesitant towards this patch because Bad Things(tm) happen if numeric 
user names ever need to be resolved at the system level. (hint: they aren't 
supported by POSIX.)

> In WebHDFS numeric usernames do not work with DataNode
> --
>
> Key: HDFS-10382
> URL: https://issues.apache.org/jira/browse/HDFS-10382
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: ramtin
>Assignee: ramtin
> Attachments: HADOOP-10382.patch
>
>
> Operations like {code:java}curl -i 
> -L "http://:/webhdfs/v1/?user.name=0123&op=OPEN"{code} that are 
> redirected to the DataNode fail because the DataNode does not read the 
> suggested domain pattern from the configuration.
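
For context, the username pattern is a configurable knob (the key comes from 
HDFS-4983; the relaxed value below is only an illustration, and per this 
issue it is the DataNode side that fails to read it):
{code}
import org.apache.hadoop.conf.Configuration;

public class RelaxedUserPattern {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Allow usernames with a leading digit by relaxing the default pattern,
    // which requires a leading letter or underscore.
    conf.set("dfs.webhdfs.user.provider.user.pattern",
        "^[A-Za-z0-9_][A-Za-z0-9._-]*[$]?$");
  }
}
{code}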






[jira] [Updated] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10410:
--
Attachment: (was: HADOOP-10410.001.patch)

> RedundantEditLogInputStream#LOG is set to wrong class
> -
>
> Key: HDFS-10410
> URL: https://issues.apache.org/jira/browse/HDFS-10410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
>
> Found the issue while analyzing a log message that points to the wrong class.
> {code}
> class RedundantEditLogInputStream extends EditLogInputStream {
>   public static final Log LOG = 
> LogFactory.getLog(EditLogInputStream.class.getName());
> {code}
> should be changed to:
> {code}
>   public static final Log LOG = 
> LogFactory.getLog(RedundantEditLogInputStream.class.getName());
> {code}






[jira] [Commented] (HDFS-10411) libhdfs++: Incorrect parse of URIs from hdfs-site.xml

2016-05-16 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285198#comment-15285198
 ] 

James Clampffer commented on HDFS-10411:


I can't edit the description for some reason. But just to show what all the 
getters output when the URI object is in this state:
{code}
uri.str() = " this-is-node-01.duder.com:///8020"
uri.get_scheme() = " this-is-node-01.duder.com"
uri.get_host() = ""
uri.get_port() = unset optional
uri.get_path() = "/8020"
uri.get_fragment() = ""
uri.get_query_elements = an empty vector
{code}

I'm guessing a URI without a scheme isn't technically a URI, so this is 
undefined behavior (or we aren't checking some error flag).

> libhdfs++:  Incorrect parse of URIs from hdfs-site.xml
> --
>
> Key: HDFS-10411
> URL: https://issues.apache.org/jira/browse/HDFS-10411
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: James Clampffer
>
> It looks like the URI class confuses the host and scheme if the original URI 
> didn't have a scheme.
> Example from hdfs-site.xml (config generated using the Cloudera MC):
> {code}
> <property>
>   <name>dfs.namenode.servicerpc-address.nameservice1.namenode86</name>
>   <value>this-is-node-01.duder.com:8022</value>
> </property>
> {code}
> host = empty string
> port = unset optional
> scheme = this-is-node-01.duder.com






[jira] [Commented] (HDFS-10408) Add tests for out-of-order asynchronous rename/setPermission/setOwner

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285290#comment-15285290
 ] 

Hadoop QA commented on HDFS-10408:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
26s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 4 unchanged - 1 fixed = 4 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 27s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 166m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.server.datanode.TestLargeBlockReport |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.TestAsyncDFSRename |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804222/HDFS-10408-HDFS-9924.000.patch
 |
| JIRA Issue | HDFS-10408 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  

[jira] [Commented] (HDFS-10257) Quick Thread Local Storage set-up has a small flaw

2016-05-16 Thread Stephen Bovy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285256#comment-15285256
 ] 

Stephen Bovy commented on HDFS-10257:
-

Hi Chris,

I got the build to work by disabling the error message for GENERATED_JAVAH 
and commenting out its usage:

find_package(JNI REQUIRED)
if (NOT GENERATED_JAVAH)
    # Must identify where the generated headers have been placed
    #MESSAGE(FATAL_ERROR "You must set the CMake variable GENERATED_JAVAH")
endif (NOT GENERATED_JAVAH)

include_directories(
    #${GENERATED_JAVAH}
    ${CMAKE_CURRENT_SOURCE_DIR}
    ${CMAKE_BINARY_DIR}
    ${JNI_INCLUDE_DIRS}
    main/native
    main/native/libhdfs
    ${OS_DIR}
)

I would like to find out what this variable is for. Is it something that has 
to be pre-defined, or is it something that is supposed to be created by the 
build process?





> Quick Thread Local Storage set-up has a small flaw
> --
>
> Key: HDFS-10257
> URL: https://issues.apache.org/jira/browse/HDFS-10257
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.4
> Environment: Linux 
>Reporter: Stephen Bovy
>Priority: Minor
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In {{jni_helper.c}}, in the {{getJNIEnv}} function, the
> {{THREAD_LOCAL_STORAGE_SET_QUICK(env);}} macro is in the wrong location.
> It should precede {{threadLocalStorageSet(env)}}, as follows:
> THREAD_LOCAL_STORAGE_SET_QUICK(env);
> if (threadLocalStorageSet(env)) {
>   return NULL;
> }
> AND in {{thread_local_storage.h}} the macro
> {{THREAD_LOCAL_STORAGE_SET_QUICK}}
> should be as follows:
> #ifdef HAVE_BETTER_TLS
>   #define THREAD_LOCAL_STORAGE_GET_QUICK() \
> static __thread JNIEnv *quickTlsEnv = NULL; \
> { \
>   if (quickTlsEnv) { \
> return quickTlsEnv; \
>   } \
> }
>   #define THREAD_LOCAL_STORAGE_SET_QUICK(env) \
> { \
>   quickTlsEnv = (env); \
>   return env; \
> }
> #else
>   #define THREAD_LOCAL_STORAGE_GET_QUICK()
>   #define THREAD_LOCAL_STORAGE_SET_QUICK(env)
> #endif






[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-16 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: HDFS-10390-HDFS-9924.001.patch

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.






[jira] [Commented] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting

2016-05-16 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285308#comment-15285308
 ] 

Jing Zhao commented on HDFS-10397:
--

Thanks for updating the patch, Mingliang. The current patch looks good to me 
in general. One minor comment:
{code}
if (useDiff && deleteMissing) {
  // -delete and -diff are mutually exclusive. For backward compatibility,
  // we ignore the -delete option here, instead of throwing an
  // IllegalArgumentException. See HDFS-10397 for more discussion.
  OptionsParser.LOG.warn("-delete and -diff are mutually exclusive. " +
      "The -delete option will be ignored.");
  setDeleteMissing(false);
}
{code}

This logic should be moved to the beginning of the {{valid}} method, i.e., 
the "-deleteMissing" setting should be bypassed, along with some other option 
settings, before it can cause any exception. A new unit test will also be 
helpful for this scenario; see the sketch below.
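
As a rough illustration of such a test (a sketch; the class placement and 
snapshot arguments are assumptions, while {{OptionsParser}} and 
{{DistCpOptions}} are the existing distcp types):
{code}
import org.apache.hadoop.tools.DistCpOptions;
import org.apache.hadoop.tools.OptionsParser;
import org.junit.Assert;
import org.junit.Test;

public class TestDeleteIgnoredWithDiff {
  @Test
  public void testDeleteIgnoredWithDiff() {
    DistCpOptions options = OptionsParser.parse(new String[] {
        "-update", "-delete", "-diff", "s1", "s2",
        "hdfs://localhost:8020/source", "hdfs://localhost:8020/target"});
    // -delete must be dropped with a warning rather than rejected.
    Assert.assertFalse(options.shouldDeleteMissing());
    Assert.assertTrue(options.shouldUseDiff());
  }
}
{code}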

+1 after addressing the comment.

> Distcp should ignore -delete option if -diff option is provided instead of 
> exiting
> --
>
> Key: HDFS-10397
> URL: https://issues.apache.org/jira/browse/HDFS-10397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10397.000.patch, HDFS-10397.001.patch, 
> HDFS-10397.002.patch
>
>
> In distcp, the {{-delete}} and {{-diff}} options are mutually exclusive. 
> [HDFS-8828] introduced strict checking, which makes existing applications 
> (or scripts) that previously worked just fine with both {{-delete}} and 
> {{-diff}} stop working because of the 
> {{java.lang.IllegalArgumentException: Diff is valid only with update 
> options}} exception.
> To keep backward compatibility, we can ignore the {{-delete}} option when 
> the {{-diff}} option is given, instead of exiting the program. Along with 
> that, we can print a warning message saying that _Diff is valid only with 
> update options, and -delete option is ignored_.






[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-16 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285275#comment-15285275
 ] 

Xiaobing Zhou commented on HDFS-10390:
--

The v001 patch is posted. It:
1. Fixed an issue in the tests, e.g. TestAsyncDFS#internalTestBatchAsyncAcl:
{code}
waitForReturnValues(setAclFutureQueue);
setAclFutureQueue.clear();
// ...
waitForReturnValues(getAclFutureQueue, expectedAclSpec, cluster, fs);
getAclFutureQueue.clear();
{code}

2. Removed some changes in TestAsyncDFSRename, since HDFS-10408 addressed them.

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.






[jira] [Updated] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10410:
--
Attachment: HDFS-10410.001.patch

Uploaded the patch with the correct file name.

> RedundantEditLogInputStream#LOG is set to wrong class
> -
>
> Key: HDFS-10410
> URL: https://issues.apache.org/jira/browse/HDFS-10410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10410.001.patch
>
>
> Found the issue while analyzing a log message that points to the wrong class.
> {code}
> class RedundantEditLogInputStream extends EditLogInputStream {
>   public static final Log LOG = 
> LogFactory.getLog(EditLogInputStream.class.getName());
> {code}
> should be changed to:
> {code}
>   public static final Log LOG = 
> LogFactory.getLog(RedundantEditLogInputStream.class.getName());
> {code}






[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285721#comment-15285721
 ] 

Hadoop QA commented on HDFS-10390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 27s 
{color} | {color:red} root: patch generated 3 new + 418 unchanged - 0 fixed = 
421 total (was 418) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 37s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 53s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 58s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 49s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 59s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 8s {color} 
| {color:red} hadoop-hdfs 

[jira] [Commented] (HDFS-10208) Addendum for HDFS-9579: to handle the case when client machine can't resolve network path

2016-05-16 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285758#comment-15285758
 ] 

Sangjin Lee commented on HDFS-10208:


Sorry for the delay [~mingma]. I am +1. Unless I hear additional comments (cc 
[~brahmareddy]), I'll commit it this evening. Let me know which versions this 
should be committed to other than trunk and branch-2 (2.9.0).

> Addendum for HDFS-9579: to handle the case when client machine can't resolve 
> network path
> -
>
> Key: HDFS-10208
> URL: https://issues.apache.org/jira/browse/HDFS-10208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-10208-2.patch, HDFS-10208-3.patch, 
> HDFS-10208-4.patch, HDFS-10208-5.patch, HDFS-10208.patch
>
>
> If DFSClient runs on a machine that can't resolve its network path, 
> {{DNSToSwitchMapping}} will return {{DEFAULT_RACK}}. In addition, if somehow 
> {{dnsToSwitchMapping.resolve}} returns null, that will cause an exception 
> when it tries to create {{clientNode}}. In either case, there is no need to 
> create {{clientNode}}, and we should treat its network distance to any 
> datanode as Integer.MAX_VALUE.
> {noformat}
> clientNode = new NodeBase(clientHostName,
> dnsToSwitchMapping.resolve(nodes).get(0));
> {noformat}
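
A minimal sketch of the guard being discussed (variable names are taken from 
the snippet above; the surrounding method and the single-host resolve call 
are assumptions):
{code}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.net.DNSToSwitchMapping;
import org.apache.hadoop.net.NetworkTopology;
import org.apache.hadoop.net.Node;
import org.apache.hadoop.net.NodeBase;

public class ClientNodeGuard {
  static Node resolveClientNode(DNSToSwitchMapping dnsToSwitchMapping,
      String clientHostName) {
    List<String> racks =
        dnsToSwitchMapping.resolve(Arrays.asList(clientHostName));
    if (racks == null || racks.isEmpty()
        || NetworkTopology.DEFAULT_RACK.equals(racks.get(0))) {
      // No usable network path: skip creating clientNode and treat the
      // distance to any datanode as Integer.MAX_VALUE.
      return null;
    }
    return new NodeBase(clientHostName, racks.get(0));
  }
}
{code}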






[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-16 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285372#comment-15285372
 ] 

Andrew Wang commented on HDFS-7240:
---

Hi all, I had the opportunity to hear more about Ozone at Apache Big Data, and 
chatted with Anu afterwards. Quite interesting, I learned a lot. Thanks Anu for 
the presentation and fielding my questions.

I'm re-posting my notes and questions here. Anu said he'd be posting a new 
design doc soon to address my questions.

Notes:

* Key Space Manager and Storage Container Manager are the "master" services in 
Ozone, and are the equivalent of FSNamesystem and the BlockManager in HDFS. 
Both are Raft-replicated services. There is a new Raft implementation being 
worked on internally.
* The block container abstraction is a mutable range of KV pairs. It's 
essentially a ~5GB LevelDB for metadata + on-disk files for the data. Container 
metadata is replicated via Raft. Container data is replicated via chain 
replication.
* Since containers are mutable and the replicas are independent, the on-disk 
state will be different. This means we need to do logical rather than physical 
replication.
* Container data is stored as chunks, where a chunk is maybe 4-8MB. Chunks are 
immutable. Chunks are a (file, offset, length) triplet. Currently each chunk is 
stored as a separate file.
* Use of copysets to reduce the risk of data loss due to independent node 
failures.

Questions:

* My biggest concern is that erasure coding is not a first-class consideration 
in this system, and seems like it will be quite difficult to implement. EC is 
table stakes in the blobstore world, it's implemented by all the cloud 
blobstores I'm aware of (S3, WASB, etc). Since containers are mutable, we are 
not able to erasure-code containers together, else we suffer from the 
equivalent of the RAID-5 write hole. It's the same issue we're dealing with on 
HDFS-7661 for hflush/hsync EC support. There's also the complexity that a 
container is replicated to 3 nodes via Raft, but EC data is typically stored 
across 14 nodes.
* Since LevelDB is being used for metadata storage and separately being 
replicated via Raft, are there concerns about metadata write amplification?
* Can we re-use the QJM code instead of writing a new replicated log 
implementation? QJM is battle-tested, and consensus is a known hard problem to 
get right.
* Are there concerns about storing millions of chunk files per disk? Writing 
each chunk as a separate file requires more metadata ops and fsyncs than 
appending to a file. We also need to be very careful to never require a full 
scan of the filesystem. The HDFS DN does full scans right now (DU, volume 
scanner).
* Any thoughts about how we go about packing multiple chunks into a larger file?
* Merges and splits of containers. We need nice large 5GB containers to hit the 
SCM scalability targets. However, I think we're going to have a harder time 
with this than a system like HBase. HDFS sees a relatively high delete rate for 
recently written data, e.g. intermediate data in a processing pipeline. HDFS 
also sees a much higher variance in key/value size. Together, these factors 
mean Ozone will likely be doing many more merges and splits than HBase to keep 
the container size high. This is concerning since splits and merges are 
expensive operations, and based on HBase's experience, are hard to get right.
* What kind of sharing do we get with HDFS, considering that HDFS doesn't use 
block containers and the metadata services are separate from the NN? Or is 
nothing shared?
* Any thoughts on how we will transition applications like Hive and HBase to 
Ozone? These apps use rename and directories for synchronization, which are not 
possible on Ozone.
* Have you experienced data loss from independent node failures, thus 
motivating the need for copysets? I think the idea is cool, but the RAMCloud 
network hardware expectations are quite different from ours. Limiting the set 
of nodes for re-replication means you have less flexibility to avoid 
top-of-rack switches and decreased parallelism. It's also not clear how this 
type of data placement meshes with EC, or the other quite sophisticated types 
of block placement we currently support in HDFS.
* How do you plan to handle files larger than 5GB? Large files right now are 
also not spread across multiple nodes and disks, limiting IO performance.
* Are all reads and writes served by the container's Raft master? IIUC that's 
how you get strong consistency, but it means we don't have the same performance 
benefits we have now in HDFS from 3-node replication.

I also ask that more of this information and decision making be shared on 
public mailing lists and JIRA. The KSM is not mentioned in the architecture 
document, nor the fact that the Ozone metadata is being replicated via Raft 
rather than stored in containers. I was not aware that there was already progress 

[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285494#comment-15285494
 ] 

Arpit Agarwal commented on HDFS-10383:
--

Then it looks like certain updates are restricted to committer roles. Sorry 
about that.

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close resources 
> safely or elegantly. We can use the try-with-resources statement to address 
> this problem.
> Especially, as {{DFSTestUtil}} is widely used in tests, we need to preserve 
> any exception thrown during the processing of the resource while still 
> guaranteeing it is eventually closed. Take, for example, the current 
> implementation of {{DFSTestUtil#createFile()}}: it closes the 
> FSDataOutputStream in the {{finally}} block, and if the internal 
> {{DFSOutputStream#close()}} throws an exception during closing, which it 
> often does, the exception thrown during the processing will be lost. See 
> this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  where we have to guess at the root cause.
> Using try-with-resources, we can close the resources safely, and the 
> exceptions thrown both in processing and in closing will be available (the 
> closing exception will be suppressed). Besides the try-with-resources, if a 
> stream is not necessary, don't create/close it.






[jira] [Created] (HDFS-10412) Add optional non-thread support for better performance

2016-05-16 Thread Stephen Bovy (JIRA)
Stephen Bovy created HDFS-10412:
---

 Summary: Add optional non-thread support for better performance
 Key: HDFS-10412
 URL: https://issues.apache.org/jira/browse/HDFS-10412
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Reporter: Stephen Bovy
Priority: Minor


I would like to propose some simple optional changes, activated by a compiler 
flag, that would enable a typical monolithic single-threaded application to 
use libhdfs without incurring thread-related overhead it does not need.

Here is a proposal for these changes:

#1 Add a new function to "jni_helper.c":

/** getSoloJNIEnv: A helper function to get the JNIEnv* for non-threaded use.
 * If no JVM exists, then one will be created. JVM command line arguments
 * are obtained from the LIBHDFS_OPTS environment variable.
 * @param: None.
 * @return The JNIEnv* for non-threaded use.
 */
JNIEnv* getSoloJNIEnv(void)
{
    JNIEnv *env;
    env = getGlobalJNIEnv();
    if (!env) {
        fprintf(stderr, "getSoloJNIEnv: getGlobalJNIEnv failed\n");
        return NULL;
    }
    return env;
}

#2 Add the following to "hdfs.c": a static global variable "JNIEnv* 
hdfsJNIEnv;" and a macro "GETJNIENV();":

#ifdef NOTHREAD
static JNIEnv* hdfsJNIEnv = NULL;
#define GETJNIENV()                   \
    if (hdfsJNIEnv == NULL) {         \
        hdfsJNIEnv = getSoloJNIEnv(); \
    }                                 \
    env = hdfsJNIEnv;
#else
#define GETJNIENV() env = getJNIEnv();
#endif

The above new macro would be used as in the following example:

int hdfsFileGetReadStatistics(hdfsFile file,
                              struct hdfsReadStatistics **stats)
{
    jthrowable jthr;
    jobject readStats = NULL;
    jvalue jVal;
    struct hdfsReadStatistics *s = NULL;
    int ret;
    JNIEnv* env;
    //  JNIEnv* env = getJNIEnv();
    GETJNIENV();

    ( ... )
}









[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-16 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: HDFS-10390-HDFS-9924.002.patch

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.






[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-05-16 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285655#comment-15285655
 ] 

Xiaobing Zhou commented on HDFS-9924:
-

I filed HDFS-10413 for this. Thank you, [~mingma].

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
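
As a usage sketch (hedged: {{AsyncDistributedFileSystem}} and the accessor 
below reflect the HDFS-9924 feature branch and may change):
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.AsyncDistributedFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class AsyncRenameSketch {
  static void renameMany(DistributedFileSystem dfs,
      List<Path> srcs, List<Path> dsts) throws Exception {
    AsyncDistributedFileSystem adfs = dfs.getAsyncDistributedFileSystem();
    List<Future<Void>> futures = new ArrayList<>();
    for (int i = 0; i < srcs.size(); i++) {
      // Each call returns a Future immediately instead of blocking.
      futures.add(adfs.rename(srcs.get(i), dsts.get(i)));
    }
    for (Future<Void> f : futures) {
      f.get(); // collect results (or exceptions) after issuing all calls
    }
  }
}
{code}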






[jira] [Updated] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10363:
-
Summary: Ozone: Introduce new config keys for SCM service endpoints  (was: 
Ozone: Introduce new config keys for SCM service addresses)

> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: OzoneScmEndpointconfiguration.pdf
>
>
> The SCM should have its own config keys to specify service addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285612#comment-15285612
 ] 

Hadoop QA commented on HDFS-10397:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s 
{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-distcp in trunk failed with JDK v1.7.0_95. {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-distcp in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-distcp in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 8s 
{color} | {color:red} hadoop-distcp in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 8s {color} 
| {color:red} hadoop-distcp in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} hadoop-tools/hadoop-distcp: patch generated 0 new + 
76 unchanged - 11 fixed = 76 total (was 87) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 8s 
{color} | {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 4s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 9s {color} | 
{color:red} hadoop-distcp in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804277/HDFS-10397.003.patch |
| JIRA Issue | HDFS-10397 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a30829ef71b4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285382#comment-15285382
 ] 

Hadoop QA commented on HDFS-10383:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 103 unchanged - 25 fixed = 103 total (was 128) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 2s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 69m 46s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 168m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804228/HDFS-10383.003.patch |
| JIRA Issue | HDFS-10383 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5ff5724d4994 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Updated] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting

2016-05-16 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10397:
-
Attachment: HDFS-10397.003.patch

Thanks [~jingzhao] for the review.

It makes sense to move the logic that ignores {{-deleteMissing}} ahead of its 
validation. If we ignore it, i.e. reset its value, any exception caused by its 
old value is bypassed.

As to the test, it's not ideal. Basically, {{-diff}} only works with the 
{{-update}} option, and {{-delete}} needs either the {{-update}} or the 
{{-overwrite}} option. So if a valid {{-diff}} option makes {{-delete}} 
ignored, there must be an {{-update}} option along with it, and in that case 
the {{-delete}} option would not cause an exception anyway since {{-update}} 
is present. The best we can do is test the case where only the {{-delete}} and 
{{-diff}} options are provided: the {{-delete}} option is ignored and cannot 
cause any validation exception, but validation still fails because {{-diff}} 
is not valid without an {{-update}} option.

See the v3 patch [^HDFS-10397.003.patch].
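
For illustration, a minimal sketch of the reordering described above; the {{Options}} interface and its method names are stand-ins for exposition, not distcp's actual classes:
{code}
public class DistCpIgnoreDeleteSketch {
  // Minimal stand-in for distcp's option holder.
  interface Options {
    boolean shouldUseDiff();
    boolean shouldDeleteMissing();
    void setDeleteMissing(boolean deleteMissing);
    void validate();  // throws IllegalArgumentException on bad combinations
  }

  // Reset -delete *before* validation runs, so its stale value
  // can no longer trigger the validation exception.
  static void ignoreDeleteThenValidate(Options options) {
    if (options.shouldUseDiff() && options.shouldDeleteMissing()) {
      System.err.println("Diff is valid only with update options, "
          + "and -delete option is ignored");
      options.setDeleteMissing(false);
    }
    options.validate();
  }
}
{code}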

> Distcp should ignore -delete option if -diff option is provided instead of 
> exiting
> --
>
> Key: HDFS-10397
> URL: https://issues.apache.org/jira/browse/HDFS-10397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10397.000.patch, HDFS-10397.001.patch, 
> HDFS-10397.002.patch, HDFS-10397.003.patch
>
>
> In distcp, the {{-delete}} and {{-diff}} options are mutually exclusive. 
> [HDFS-8828] introduced strict checking, which makes existing applications 
> (or scripts) that previously worked just fine with both {{-delete}} and 
> {{-diff}} stop working because of the 
> {{java.lang.IllegalArgumentException: Diff is valid only with update 
> options}} exception.
> To keep this backward compatible, we can ignore the {{-delete}} option when 
> the {{-diff}} option is given, instead of exiting the program. Along with 
> that, we can print a warning message saying that _Diff is valid only with 
> update options, and -delete option is ignored_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285579#comment-15285579
 ] 

Anu Engineer commented on HDFS-7240:


[~andrew.wang] Thank you for showing up at the talk and for the very 
interesting follow-up conversation. I am glad that we are continuing it on 
this JIRA. Just to make sure that others who might read our discussion get the 
right context, here are the slides from the talk: 

[http://schd.ws/hosted_files/apachebigdata2016/fc/Hadoop%20Object%20Store%20-%20Ozone.pdf]




> Object store in HDFS
> 
>
> Key: HDFS-7240
> URL: https://issues.apache.org/jira/browse/HDFS-7240
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: Ozone-architecture-v1.pdf, ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting

2016-05-16 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285581#comment-15285581
 ] 

Jing Zhao commented on HDFS-10397:
--

+1 on the 003 patch. I will hold the commit to see if [~yzhangal] has further 
comments.

> Distcp should ignore -delete option if -diff option is provided instead of 
> exiting
> --
>
> Key: HDFS-10397
> URL: https://issues.apache.org/jira/browse/HDFS-10397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10397.000.patch, HDFS-10397.001.patch, 
> HDFS-10397.002.patch, HDFS-10397.003.patch
>
>
> In distcp, the {{-delete}} and {{-diff}} options are mutually exclusive. 
> [HDFS-8828] introduced strict checking, which makes existing applications 
> (or scripts) that previously worked just fine with both {{-delete}} and 
> {{-diff}} stop working because of the 
> {{java.lang.IllegalArgumentException: Diff is valid only with update 
> options}} exception.
> To keep this backward compatible, we can ignore the {{-delete}} option when 
> the {{-diff}} option is given, instead of exiting the program. Along with 
> that, we can print a warning message saying that _Diff is valid only with 
> update options, and -delete option is ignored_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-16 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285599#comment-15285599
 ] 

Xiaobing Zhou commented on HDFS-10390:
--

The v002 patch added some tests for out-of-order retrieval of responses; see 
#testConservativeOutOfOrderReponseSetGetAcl and 
#testAggressiveOutOfOrderReponseSetGetAcl.


> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting

2016-05-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285629#comment-15285629
 ] 

Mingliang Liu commented on HDFS-10397:
--

Thanks [~jingzhao] for the review. The latest build and test failures are 
caused by some test env misconfiguration when switching to Java 8 (see 
[HADOOP-11858]). Let's trigger Jenkins after that is fixed.

> Distcp should ignore -delete option if -diff option is provided instead of 
> exiting
> --
>
> Key: HDFS-10397
> URL: https://issues.apache.org/jira/browse/HDFS-10397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10397.000.patch, HDFS-10397.001.patch, 
> HDFS-10397.002.patch, HDFS-10397.003.patch
>
>
> In distcp, the {{-delete}} and {{-diff}} options are mutually exclusive. 
> [HDFS-8828] introduced strict checking, which makes existing applications 
> (or scripts) that previously worked just fine with both {{-delete}} and 
> {{-diff}} stop working because of the 
> {{java.lang.IllegalArgumentException: Diff is valid only with update 
> options}} exception.
> To keep this backward compatible, we can ignore the {{-delete}} option when 
> the {{-diff}} option is given, instead of exiting the program. Along with 
> that, we can print a warning message saying that _Diff is valid only with 
> update options, and -delete option is ignored_.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285693#comment-15285693
 ] 

Hadoop QA commented on HDFS-10410:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} trunk passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: patch generated 0 
new + 13 unchanged - 1 fixed = 13 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 28s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 144m 2s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
| JDK v1.7.0_101 Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804257/HDFS-10410.001.patch |
| JIRA Issue | HDFS-10410 |
| Optional Tests |  asflicense  compile  javac  

[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285486#comment-15285486
 ] 

Mingliang Liu commented on HDFS-10383:
--

Test failure is not related.

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close their resources 
> safely or elegantly. We can use the try-with-resources statement to address 
> this problem.
> Specifically, as {{DFSTestUtil}} is widely used in tests, we need to preserve 
> any exception thrown while processing the resource while still guaranteeing 
> the resource is eventually closed. For example, the current implementation 
> of {{DFSTestUtil#createFile()}} closes the FSDataOutputStream in the 
> {{finally}} block; if the internal {{DFSOutputStream#close()}} then throws 
> an exception, which it often does, the exception thrown during processing is 
> lost. See this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  where we have to guess what the root cause was.
> Using try-with-resources, we can close the resources safely, and the 
> exceptions thrown both in processing and in closing remain available (the 
> closing exception is suppressed). Besides try-with-resources, if a stream is 
> not necessary, don't create or close it at all.
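
For illustration, a minimal sketch of the pattern (a generic write helper under assumed conditions, not the actual {{DFSTestUtil}} code):
{code}
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TryWithResourcesSketch {
  static void writeFile(FileSystem fs, Path path, byte[] data)
      throws IOException {
    // If write() throws and the implicit close() also throws, the write()
    // exception propagates and the close() exception is attached to it as
    // a suppressed exception (Throwable#getSuppressed), so neither is lost.
    try (FSDataOutputStream out = fs.create(path)) {
      out.write(data);
    }
  }
}
{code}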



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9924) [umbrella] Asynchronous HDFS Access

2016-05-16 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285597#comment-15285597
 ] 

Ming Ma commented on HDFS-9924:
---

Some general comments: When MR ran into similar performance issues, it either 
revised how it uses HDFS as in MAPREDUCE-6336, or it used multiple threads as 
in MAPREDUCE-2349. The multiple threads approach might work well for some 
scenarios, but might not be desirable if it is launched inside a YARN container 
where there could be other containers on the same machine. On that note, it 
might be useful to provide async support for listStatus as well to simplify 
MAPREDUCE-2349.

> [umbrella] Asynchronous HDFS Access
> ---
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10413) Implement asynchronous listStatus for DistributedFileSystem

2016-05-16 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10413:


 Summary: Implement asynchronous listStatus for 
DistributedFileSystem
 Key: HDFS-10413
 URL: https://issues.apache.org/jira/browse/HDFS-10413
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaobing Zhou


Per the 
[comment|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15285597=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15285597]
 from [~mingma], this JIRA tracks the effort of implementing async listStatus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285678#comment-15285678
 ] 

Hadoop QA commented on HDFS-10390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 7s 
{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 10s 
{color} | {color:red} root in trunk failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
29s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs-client in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs-client in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 32s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 10s 
{color} | {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 10s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 30s 
{color} | {color:red} root: patch generated 6 new + 418 unchanged - 0 fixed = 
424 total (was 418) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} 

[jira] [Commented] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285698#comment-15285698
 ] 

Hudson commented on HDFS-10410:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9771 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9771/])
HDFS-10410. RedundantEditLogInputStream.LOG is set to wrong class. (John 
Zhuge) (lei: rev 6a6e74acf5c38a4995c4622148721cfe2f1fbdad)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/RedundantEditLogInputStream.java


> RedundantEditLogInputStream#LOG is set to wrong class
> -
>
> Key: HDFS-10410
> URL: https://issues.apache.org/jira/browse/HDFS-10410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10410.001.patch
>
>
> Found the issue while analyzing a log message that points to the wrong class.
> {code}
> class RedundantEditLogInputStream extends EditLogInputStream {
>   public static final Log LOG = 
> LogFactory.getLog(EditLogInputStream.class.getName());
> {code}
> should be changed to:
> {code}
>   public static final Log LOG = 
> LogFactory.getLog(RedundantEditLogInputStream.class.getName());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-16 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-10410:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   2.9.0
   Status: Resolved  (was: Patch Available)

The change is trivial. It provides clearer context for the log messages.

+1. Thanks a lot, [~jzhuge]!

> RedundantEditLogInputStream#LOG is set to wrong class
> -
>
> Key: HDFS-10410
> URL: https://issues.apache.org/jira/browse/HDFS-10410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10410.001.patch
>
>
> Found the issue while analyzing a log message that points to the wrong class.
> {code}
> class RedundantEditLogInputStream extends EditLogInputStream {
>   public static final Log LOG = 
> LogFactory.getLog(EditLogInputStream.class.getName());
> {code}
> should be changed to:
> {code}
>   public static final Log LOG = 
> LogFactory.getLog(RedundantEditLogInputStream.class.getName());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-16 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285485#comment-15285485
 ] 

Mingliang Liu commented on HDFS-10383:
--

Thanks [~arpitagarwal] for taking care of it. Sorry, I still had no luck, even 
after logging out and logging back in.

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close their resources 
> safely or elegantly. We can use the try-with-resources statement to address 
> this problem.
> Specifically, as {{DFSTestUtil}} is widely used in tests, we need to preserve 
> any exception thrown while processing the resource while still guaranteeing 
> the resource is eventually closed. For example, the current implementation 
> of {{DFSTestUtil#createFile()}} closes the FSDataOutputStream in the 
> {{finally}} block; if the internal {{DFSOutputStream#close()}} then throws 
> an exception, which it often does, the exception thrown during processing is 
> lost. See this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  where we have to guess what the root cause was.
> Using try-with-resources, we can close the resources safely, and the 
> exceptions thrown both in processing and in closing remain available (the 
> closing exception is suppressed). Besides try-with-resources, if a stream is 
> not necessary, don't create or close it at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10363:
-
Attachment: ozone-site.xml
HDFS-10363.01.patch

v01 patch. After this patch you will need an ozone-site.xml to test Ozone. I 
am attaching the minimal ozone-site.xml file I used for testing. I did manual 
testing to verify that the Ozone REST commands work as expected.

You still need to start and then stop the NameNode before starting the SCM; 
full coexistence requires additional fixes.

Also, this patch breaks the existing Ozone tests that use MiniDFSCluster. They 
need to be switched to MiniOzoneCluster, which I think needs some more fixes. 
To keep this patch tractable I will fix the tests separately.

> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10363.01.patch, OzoneScmEndpointconfiguration.pdf, 
> ozone-site.xml
>
>
> The SCM should have its own config keys to specify service addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7240) Object store in HDFS

2016-05-16 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285948#comment-15285948
 ] 

Anu Engineer commented on HDFS-7240:


[~andrew.wang] Thank you for your comments; they are well-thought-out and 
extremely valuable questions. I will make sure that all the areas you are 
asking about are discussed in the next update of the design doc.

bq. Anu said he'd be posting a new design doc soon to address my questions.
 
I am working on that, but just to make sure your questions are not lost in the 
big picture of the design doc, I am answering them individually here.
 
bq. My biggest concern is that erasure coding is not a first-class 
consideration in this system.
Nothing in ozone prevents a chunk from being EC-encoded. In fact, ozone makes 
no assumptions about the location or the types of chunks at all, so it is 
quite trivial to create a new chunk type and write it into containers. We are 
focused on the overall picture of ozone right now, and I would welcome any 
contribution you can make on EC and ozone chunks if that is a concern you 
would like us to address earlier. From the architecture point of view I do not 
see any issues.

bq. Since LevelDB is being used for metadata storage and separately being 
replicated via Raft, are there concerns about metadata write amplification?
Metadata is such a small slice of information about a block – really what you 
are saying is that the block name and the block's hash get written twice, once 
through the RAFT log and a second time when RAFT commits the information. 
Since the data we are talking about is so small, I am not worried about it at 
all.
 
bq. Can we re-use the QJM code instead of writing a new replicated log 
implementation? QJM is battle-tested, and consensus is a known hard problem to 
get right.
We considered this; however, the consensus is to write a *consensus protocol* 
that is easier to understand and easier for more contributors to work on. The 
fact that QJM was not written as a library makes it very hard for us to pull 
it out in a clean fashion. Again, if you feel strongly about it, please feel 
free to move QJM into a reusable library, and all of us will benefit from it.

bq. Are there concerns about storing millions of chunk files per disk? Writing 
each chunk as a separate file requires more metadata ops and fsyncs than 
appending to a file. We also need to be very careful to never require a full 
scan of the filesystem. The HDFS DN does full scans right now (DU, volume 
scanner).
Nothing in the chunk architecture assumes that chunks are stored as separate 
files. The fact that a chunk is a triplet \{FileName, Offset, Length\} gives 
you the flexibility to store thousands of chunks in a single physical file.
 
 bq. Any thoughts about how we go about packing multiple chunks into a larger 
file?
Yes, write the first chunk and then write the second chunk to the same file. In 
fact, chunks are specifically designed to address the small-file problem, so 
two keys can point to the same file. For example,
KeyA -> \{File,0, 100\}
KeyB -> \{File,101, 1000\}
is a perfectly valid layout under the container architecture.
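
As a hypothetical sketch of that layout (illustrative only; not Ozone's actual classes):
{code}
// A chunk is just a named region of a container-managed file.
final class ChunkRef {
  final String fileName;
  final long offset;
  final long length;

  ChunkRef(String fileName, long offset, long length) {
    this.fileName = fileName;
    this.offset = offset;
    this.length = length;
  }

  public static void main(String[] args) {
    // Two keys sharing one physical file, as in the example above.
    ChunkRef keyA = new ChunkRef("File", 0, 100);
    ChunkRef keyB = new ChunkRef("File", 101, 1000);
    System.out.println(keyA.fileName.equals(keyB.fileName));  // true
  }
}
{code}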
 
bq. Merges and splits of containers. We need nice large 5GB containers to hit 
the SCM scalability targets. Together, these factors mean Ozone will likely be 
doing many more merges and splits than HBase to keep the container size high
  
Ozone actively tries to avoid merges, and splits only when needed. A 
container can be thought of as a really large block, so I do not expect to see 
anything other than the standard block workload on containers. The fact that 
containers can be split is something that allows us to avoid pre-allocating 
container space. That is merely a convenience; if you think of these as 
blocks, you will see that it is very similar.
 
Ozone will never try to do merges and splits at the HBase level. From the 
container and ozone perspective we are more focused on good data distribution 
across the cluster – i.e. what the balancer does today – and containers are a 
flat namespace, just like blocks, which we allocate when needed.
 
So once more, just to make sure we are on the same page: merges are rare (not 
generally required) and splits happen when we want to redistribute data on the 
same machine.
 
bq. What kind of sharing do we get with HDFS, considering that HDFS doesn't use 
block containers, and the metadata services are separate from the NN? not 
shared?

Great question. We initially started off by attacking the scalability question 
of ozone and soon realized that HDFS scalability and ozone scalability have to 
solve the same problems. So the container infrastructure that we have built is 
something that can be used by both ozone and HDFS. Currently we are focused on 
ozone, and containers will co-exist on datanodes with blockpools. That is, 
ozone should be and will be deployable on a vanilla HDFS cluster. In 

[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-16 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285969#comment-15285969
 ] 

Yiqun Lin commented on HDFS-10400:
--

I looked into the code again. I found that the method {{displayError}} will 
increase the {{numErrors}} count; the IOException caught in the inner method 
will not influence the result. So it seems some un-trapped exception happened 
here, as [~jo_des...@yahoo.com] mentioned in the comment. I would like to make 
improvements in two places:

* Catch un-trapped exceptions in the {{Command#run}} method.
* The exit code for errors here does not match the instruction in 
https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put; here it 
returns {{1}} rather than the documented -1.

I will post a patch later. Thanks for the review.
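
As a rough sketch of the first item (hypothetical structure; the real {{Command}} class differs):
{code}
// Illustrative only: trap otherwise-unhandled exceptions and turn them
// into a non-zero exit code instead of letting the shell exit with 0.
abstract class ShellCommandSketch {
  protected int numErrors = 0;

  protected abstract void processArguments(String... argv) throws Exception;

  public int run(String... argv) {
    try {
      processArguments(argv);
    } catch (Exception e) {
      System.err.println(getClass().getSimpleName() + ": " + e.getMessage());
      numErrors++;  // previously un-trapped exceptions now count as errors
    }
    return numErrors == 0 ? 0 : 1;  // non-zero on any detected failure
  }
}
{code}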

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, it terminates normally (exit code 0).
> Expectation is that any detectable failure generates an exit code different 
> than zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10417) Actionable logs

2016-05-16 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HDFS-10417:
--
Attachment: HDFS-10417.000.patch

> Actionable logs 
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Priority: Minor
> Attachments: HDFS-10417.000.patch
>
>
> The exception message thrown by {{checkBlockLocalPathAccess}} is very 
> specific to the implementation details. It is really hard for users to 
> understand unless they read and understand the code. 
> The code is shown as follows:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
>   private void checkBlockLocalPathAccess() throws IOException {
>     checkKerberosAuthMethod("getBlockLocalPathInfo()");
>     String currentUser =
>         UserGroupInformation.getCurrentUser().getShortUserName();
>     if (!usersWithLocalPathAccess.contains(currentUser)) {
>       throw new AccessControlException(
>           "Can't continue with getBlockLocalPathInfo() "
>               + "authorization. The user " + currentUser
>               + " is not allowed to call getBlockLocalPathInfo");
>     }
>   }
> {code}
> (basically they need to understand the code logic of getBlockLocalPathInfo)
> \\
> Note that {{usersWithLocalPathAccess}} is a *private final* list populated 
> purely from the configuration setting {{dfs.block.local-path-access.user}}:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private final List<String> usersWithLocalPathAccess;
> 
> this.usersWithLocalPathAccess = Arrays.asList(
>     conf.getTrimmedStrings(DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY));
> {code}
> In other words, the check fails simply because the current user is not 
> specified in the configuration setting 
> {{dfs.block.local-path-access.user}}. The log message should be much 
> clearer, to make it easy for users to take action, as demonstrated in the 
> attached patch. 
> Thanks!
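
For example, one possible wording along these lines (illustrative; not necessarily what the attached patch does), mirroring the method quoted above:
{code}
private void checkBlockLocalPathAccess() throws IOException {
  checkKerberosAuthMethod("getBlockLocalPathInfo()");
  String currentUser =
      UserGroupInformation.getCurrentUser().getShortUserName();
  if (!usersWithLocalPathAccess.contains(currentUser)) {
    // Name the configuration key so the user knows what to change.
    throw new AccessControlException("getBlockLocalPathInfo() is not allowed "
        + "for user " + currentUser + ". Add the user to "
        + "dfs.block.local-path-access.user in the DataNode configuration "
        + "to grant access.");
  }
}
{code}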



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10417) Actionable logs

2016-05-16 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HDFS-10417:
--
Status: Patch Available  (was: Open)

> Actionable logs 
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Priority: Minor
> Attachments: HDFS-10417.000.patch
>
>
> The exception message thrown by {{checkBlockLocalPathAccess}} is very 
> specific to the implementation details. It is really hard for users to 
> understand unless they read and understand the code. 
> The code is shown as follows:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
>   private void checkBlockLocalPathAccess() throws IOException {
>     checkKerberosAuthMethod("getBlockLocalPathInfo()");
>     String currentUser =
>         UserGroupInformation.getCurrentUser().getShortUserName();
>     if (!usersWithLocalPathAccess.contains(currentUser)) {
>       throw new AccessControlException(
>           "Can't continue with getBlockLocalPathInfo() "
>               + "authorization. The user " + currentUser
>               + " is not allowed to call getBlockLocalPathInfo");
>     }
>   }
> {code}
> (basically they need to understand the code logic of getBlockLocalPathInfo)
> \\
> Note that {{usersWithLocalPathAccess}} is a *private final* list populated 
> purely from the configuration setting {{dfs.block.local-path-access.user}}:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private final List<String> usersWithLocalPathAccess;
> 
> this.usersWithLocalPathAccess = Arrays.asList(
>     conf.getTrimmedStrings(DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY));
> {code}
> In other words, the check fails simply because the current user is not 
> specified in the configuration setting 
> {{dfs.block.local-path-access.user}}. The log message should be much 
> clearer, to make it easy for users to take action, as demonstrated in the 
> attached patch. 
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10417) Actionable msgs for checkBlockLocalPathAccess

2016-05-16 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HDFS-10417:
--
Summary: Actionable msgs for checkBlockLocalPathAccess  (was: Actionable 
logs )

> Actionable msgs for checkBlockLocalPathAccess
> -
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Priority: Minor
> Attachments: HDFS-10417.000.patch
>
>
> The exception message thrown by {{checkBlockLocalPathAccess}} is very 
> specific to the implementation details. It is really hard for users to 
> understand unless they read and understand the code. 
> The code is shown as follows:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
>   private void checkBlockLocalPathAccess() throws IOException {
>     checkKerberosAuthMethod("getBlockLocalPathInfo()");
>     String currentUser =
>         UserGroupInformation.getCurrentUser().getShortUserName();
>     if (!usersWithLocalPathAccess.contains(currentUser)) {
>       throw new AccessControlException(
>           "Can't continue with getBlockLocalPathInfo() "
>               + "authorization. The user " + currentUser
>               + " is not allowed to call getBlockLocalPathInfo");
>     }
>   }
> {code}
> (basically they need to understand the code logic of getBlockLocalPathInfo)
> \\
> Note that {{usersWithLocalPathAccess}} is a *private final* list populated 
> purely from the configuration setting {{dfs.block.local-path-access.user}}:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private final List<String> usersWithLocalPathAccess;
> 
> this.usersWithLocalPathAccess = Arrays.asList(
>     conf.getTrimmedStrings(DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY));
> {code}
> In other words, the check fails simply because the current user is not 
> specified in the configuration setting 
> {{dfs.block.local-path-access.user}}. The log message should be much 
> clearer, to make it easy for users to take action, as demonstrated in the 
> attached patch. 
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10416) Empty exception msg in the checking of superuser privilege in DataNode

2016-05-16 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HDFS-10416:
--
Attachment: HDFS-10416.000.patch

> Empty exception msg in the checking of superuser privilege in DataNode 
> 
>
> Key: HDFS-10416
> URL: https://issues.apache.org/jira/browse/HDFS-10416
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
> Attachments: HDFS-10416.000.patch
>
>
> In {{checkSuperuserPrivilege}} ({{DataNode.java}}), when the check fails, it 
> throws an empty {{AccessControlException}} object which is really confusing 
> for users to understand precisely what happened underneath the "permission 
> denied" error.
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private void checkSuperuserPrivilege() ... {
>   ...
>   // Not a superuser.
>   throw new AccessControlException();
> }
> {code}
> (the method is used in a number of DataNode operations like 
> {{refreshNamenodes}}, {{deleteBlockPool}}, {{shutdownDatanode}}, just listing 
> a few).
> \\
> By comparison, look at the *exact same method* implemented for 
> {{NameNode}}:
> {code:title=org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker|borderStyle=solid}
> public void checkSuperuserPrivilege() ... {
>   if (!isSuperUser()) {
> throw new AccessControlException("Access denied for user "
> + getUser() + ". Superuser privilege is required");
>   }
> }
> {code}
> The message is much clearer and easier to understand.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10416) Empty exception msg in the checking of superuser privilege in DataNode

2016-05-16 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HDFS-10416:
--
Status: Patch Available  (was: Open)

> Empty exception msg in the checking of superuser privilege in DataNode 
> 
>
> Key: HDFS-10416
> URL: https://issues.apache.org/jira/browse/HDFS-10416
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
> Attachments: HDFS-10416.000.patch
>
>
> In {{checkSuperuserPrivilege}} ({{DataNode.java}}), when the check fails, it 
> throws an empty {{AccessControlException}} object, which makes it hard for 
> users to understand precisely what caused the "permission denied" error.
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private void checkSuperuserPrivilege() ... {
>   ...
>   // Not a superuser.
>   throw new AccessControlException();
> }
> {code}
> (the method is used in a number of DataNode operations like 
> {{refreshNamenodes}}, {{deleteBlockPool}}, {{shutdownDatanode}}, just listing 
> a few).
> \\
> By comparison, look at the *exact same method* implemented for 
> {{NameNode}}:
> {code:title=org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker|borderStyle=solid}
> public void checkSuperuserPrivilege() ... {
>   if (!isSuperUser()) {
> throw new AccessControlException("Access denied for user "
> + getUser() + ". Superuser privilege is required");
>   }
> }
> {code}
> The message is much clearer and easier to understand.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10416) Empty exception msg in the checking of superuser privilege in DataNode

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286012#comment-15286012
 ] 

Hadoop QA commented on HDFS-10416:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 25s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 
174 unchanged - 0 fixed = 175 total (was 174) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 25s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804336/HDFS-10416.000.patch |
| JIRA Issue | HDFS-10416 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 41e51c196e8a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2c91fd8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15457/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15457/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15457/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15457/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15457/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 

[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286034#comment-15286034
 ] 

Hadoop QA commented on HDFS-10400:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 8s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} hadoop-common-project/hadoop-common: patch 
generated 0 new + 34 unchanged - 1 fixed = 34 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 20s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 10s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestFsShellReturnCode |
|   | hadoop.fs.viewfs.TestViewFsTrash |
|   | hadoop.fs.TestFsShellCopy |
|   | hadoop.fs.TestTrash |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804333/HDFS-10400.001.patch |
| JIRA Issue | HDFS-10400 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a2ea0bcc5dc5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2c91fd8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15456/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15456/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15456/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15456/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Created] (HDFS-10417) Actionable logs

2016-05-16 Thread Tianyin Xu (JIRA)
Tianyin Xu created HDFS-10417:
-

 Summary: Actionable logs 
 Key: HDFS-10417
 URL: https://issues.apache.org/jira/browse/HDFS-10417
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.2
Reporter: Tianyin Xu
Priority: Minor


The exception msg thrown by {{checkBlockLocalPathAccess}} is very specific to 
the implementation detail. It's really hard for users to understand it unless 
they read and understand the code. 

The code is shown as follows:
{code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
  private void checkBlockLocalPathAccess() throws IOException {
checkKerberosAuthMethod("getBlockLocalPathInfo()");
String currentUser = 
UserGroupInformation.getCurrentUser().getShortUserName();
if (!usersWithLocalPathAccess.contains(currentUser)) {
  throw new AccessControlException(
  "Can't continue with getBlockLocalPathInfo() "
  + "authorization. The user " + currentUser
  + " is not allowed to call getBlockLocalPathInfo");
}
  }
{code}
(basically they need to understand the code logic of {{getBlockLocalPathInfo}})

\\

Note that {{usersWithLocalPathAccess}} is a *private final* list populated 
purely from the configuration setting of {{dfs.block.local-path-access.user}}:
{code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
private final List<String> usersWithLocalPathAccess;

this.usersWithLocalPathAccess = Arrays.asList(
conf.getTrimmedStrings(DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY));
{code}

In other words, the checking fails simply because the current user is not 
specified in the configuration setting of {{dfs.block.local-path-access.user}}. 
The log message should be much clearer, to make it easy for users to take 
action, as demonstrated in the attached patch. 
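
For illustration, a more actionable message could name the gating config key 
and the remedy directly. A minimal sketch (the wording is illustrative; the 
attached patch may phrase it differently):
{code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
  private void checkBlockLocalPathAccess() throws IOException {
    checkKerberosAuthMethod("getBlockLocalPathInfo()");
    String currentUser =
        UserGroupInformation.getCurrentUser().getShortUserName();
    if (!usersWithLocalPathAccess.contains(currentUser)) {
      // Tell the user exactly which setting gates this call and how to fix it.
      throw new AccessControlException(
          "getBlockLocalPathInfo() is not allowed for user " + currentUser
          + ". To grant access, add the user to the "
          + DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY
          + " setting and restart the DataNode.");
    }
  }
{code}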

Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-16 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10400:
-
Attachment: HDFS-10400.001.patch

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
> Attachments: HDFS-10400.001.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, the command terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code 
> different from zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> The following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]
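> For reference, the exit status is also visible programmatically; a minimal 
> sketch using the public FsShell/ToolRunner API (file paths are placeholders):
> {code:title=PutExitCode.java|borderStyle=solid}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FsShell;
> import org.apache.hadoop.util.ToolRunner;
>
> public class PutExitCode {
>   public static void main(String[] args) throws Exception {
>     // Equivalent of "hdfs dfs -put local.txt /tmp/remote.txt".
>     int rc = ToolRunner.run(new FsShell(new Configuration()),
>         new String[] { "-put", "local.txt", "/tmp/remote.txt" });
>     // Per the documented contract, rc should be nonzero on any failure;
>     // this bug makes some failures return 0.
>     System.exit(rc);
>   }
> }
> {code}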



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-16 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10400:
-
Status: Patch Available  (was: Open)

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
> Attachments: HDFS-10400.001.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, the command terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code 
> different from zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error. 
> The following is the exception generated: 
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10409) libhdfs++: Something is holding connection_state_lock in RpcConnectionImpl destructor

2016-05-16 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10409:
---
Attachment: locked_dtor.patch

Simple reproducer.  Seems to reproduce consistently; originally found when 
pointing the client at a standby NN and doing various RPC calls.

> libhdfs++: Something is holding connection_state_lock in RpcConnectionImpl 
> destructor
> -
>
> Key: HDFS-10409
> URL: https://issues.apache.org/jira/browse/HDFS-10409
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: James Clampffer
> Attachments: locked_dtor.patch
>
>
> The destructor of RpcConnectionImpl grabs a lock using a std::lock_guard<>.  
> It turns out something is already holding the lock when this happens.  The 
> best bet is something that looks like:
> {code}
> void SomeFunctionThatShouldntTakeLock() {
>   std::lock_guard<std::mutex> bad(connection_state_lock_);
>   conn_.reset(); // conn_ is a shared_ptr to RpcConnectionImpl
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-16 Thread Sangjin Lee (JIRA)
Sangjin Lee created HDFS-10415:
--

 Summary: TestDistributedFileSystem#testDFSCloseOrdering() fails on 
branch-2
 Key: HDFS-10415
 URL: https://issues.apache.org/jira/browse/HDFS-10415
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.9.0
Reporter: Sangjin Lee


{noformat}
Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
elapsed: 0.045 sec  <<< ERROR!
java.lang.NullPointerException: null
at 
org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
at 
org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
{noformat}

This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10208) Addendum for HDFS-9579: to handle the case when client machine can't resolve network path

2016-05-16 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HDFS-10208:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed it to trunk and branch-2. I did notice an existing unit test failure 
on branch-2, for which I filed HDFS-10415.

Thanks [~mingma] for your contribution, and [~brahmareddy] for your review!

> Addendum for HDFS-9579: to handle the case when client machine can't resolve 
> network path
> -
>
> Key: HDFS-10208
> URL: https://issues.apache.org/jira/browse/HDFS-10208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10208-2.patch, HDFS-10208-3.patch, 
> HDFS-10208-4.patch, HDFS-10208-5.patch, HDFS-10208.patch
>
>
> If DFSClient runs on a machine that can't resolve the network path, 
> {{DNSToSwitchMapping}} will return {{DEFAULT_RACK}}. In addition, if somehow 
> {{dnsToSwitchMapping.resolve}} returns null, that will cause an exception 
> when it tries to create {{clientNode}}. In either case, there is no need to 
> create {{clientNode}}, and we should treat its network distance to any 
> datanode as Integer.MAX_VALUE.
> {noformat}
> clientNode = new NodeBase(clientHostName,
> dnsToSwitchMapping.resolve(nodes).get(0));
> {noformat}
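> A sketch of the guard this implies (a fragment in the same context as the 
> snippet above; not necessarily the committed patch's exact code):
> {code}
> List<String> resolved = dnsToSwitchMapping.resolve(nodes);
> if (resolved == null || resolved.isEmpty()
>     || NetworkTopology.DEFAULT_RACK.equals(resolved.get(0))) {
>   // Leave clientNode null; its network distance to any datanode is then
>   // treated as Integer.MAX_VALUE.
>   clientNode = null;
> } else {
>   clientNode = new NodeBase(clientHostName, resolved.get(0));
> }
> {code}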



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9579) Provide bytes-read-by-network-distance metrics at FileSystem.Statistics level

2016-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285883#comment-15285883
 ] 

Hudson commented on HDFS-9579:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9773 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9773/])
HDFS-10208. Addendum for HDFS-9579: to handle the case when client (sjlee: rev 
61f46be071e42f9eb49a54b1bd2e54feac59f808)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NodeBase.java


> Provide bytes-read-by-network-distance metrics at FileSystem.Statistics level
> -
>
> Key: HDFS-9579
> URL: https://issues.apache.org/jira/browse/HDFS-9579
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-9579-10.patch, HDFS-9579-2.patch, 
> HDFS-9579-3.patch, HDFS-9579-4.patch, HDFS-9579-5.patch, HDFS-9579-6.patch, 
> HDFS-9579-7.patch, HDFS-9579-8.patch, HDFS-9579-9.patch, 
> HDFS-9579-branch-2.patch, HDFS-9579.patch, MR job counters.png
>
>
> For cross-DC distcp or other applications, it becomes useful to have insight 
> as to the traffic volume for each network distance to distinguish cross-DC 
> traffic, local-DC-remote-rack, etc.
> FileSystem's existing {{bytesRead}} metric tracks all the bytes read. To 
> provide additional metrics for each network distance, we can add additional 
> metrics at the FileSystem level and have {{DFSInputStream}} update the value 
> based on the network distance between the client and the datanode.
> {{DFSClient}} will resolve the client machine's network location as part of 
> its initialization. It doesn't need to resolve the datanode's network 
> location for each read, as {{DatanodeInfo}} already has the info.
> There are existing HDFS-specific metrics such as {{ReadStatistics}} and 
> {{DFSHedgedReadMetrics}}, but those are only accessible via {{DFSClient}} or 
> {{DFSInputStream}}, not something that application frameworks such as MR and 
> Tez can get to. That is the benefit of storing these new metrics in 
> FileSystem.Statistics.
> This jira only includes metrics generation by HDFS. The consumption of these 
> metrics in MR and Tez will be tracked by separate jiras.
> We can add similar metrics for the HDFS write scenario later if necessary.
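> As a consumption sketch, assuming the per-distance accessor on 
> FileSystem.Statistics that this jira's patch adds 
> ({{getBytesReadByDistance}}), a framework could poll it like the existing 
> counters:
> {code}
> FileSystem fs = FileSystem.get(conf);
> FileSystem.Statistics stats =
>     FileSystem.getStatistics(fs.getUri().getScheme(), fs.getClass());
> long sameNode = stats.getBytesReadByDistance(0); // local reads
> long sameRack = stats.getBytesReadByDistance(2); // local-DC, same rack
> long remote   = stats.getBytesReadByDistance(4); // farther away, e.g. cross-DC
> {code}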



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10416) Empty exception msg in the checking of superuser privilege in DataNode

2016-05-16 Thread Tianyin Xu (JIRA)
Tianyin Xu created HDFS-10416:
-

 Summary: Empty exception msg in the checking of superuser 
privilege in DataNode 
 Key: HDFS-10416
 URL: https://issues.apache.org/jira/browse/HDFS-10416
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.7.2
Reporter: Tianyin Xu


In {{checkSuperuserPrivilege}} ({{DataNode.java}}), when the check fails, it 
throws an empty {{AccessControlException}} object, which makes it hard for 
users to understand precisely what caused the "permission denied" 
error.

{code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
private void checkSuperuserPrivilege() ... {
  ...
  // Not a superuser.
  throw new AccessControlException();
}
{code}
(the method is used in a number of DataNode operations like 
{{refreshNamenodes}}, {{deleteBlockPool}}, {{shutdownDatanode}}, just listing a 
few).

\\

By comparison, look at the *exact same method* implemented for 
{{NameNode}}:
{code:title=org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker|borderStyle=solid}
public void checkSuperuserPrivilege() ... {
  if (!isSuperUser()) {
throw new AccessControlException("Access denied for user "
+ getUser() + ". Superuser privilege is required");
  }
}
{code}
The message is much clearer and easier to understand.
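
A minimal sketch of a similarly informative DataNode check (illustrative 
wording, not necessarily what a patch will use):
{code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
private void checkSuperuserPrivilege() ... {
  ...
  // Not a superuser: name the denied user and the required privilege.
  throw new AccessControlException("Access denied for user "
      + UserGroupInformation.getCurrentUser().getShortUserName()
      + ". Superuser privilege is required");
}
{code}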



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285909#comment-15285909
 ] 

Hadoop QA commented on HDFS-10236:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 8s 
{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-hdfs in trunk failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
1s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-hdfs in trunk failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 10s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 
428 unchanged - 1 fixed = 429 total (was 429) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 11s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 9s 
{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 10s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 13s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.hdfs.TestAclsEndToEnd |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804113/HDFS-10236-00.patch |
| JIRA Issue | HDFS-10236 |
| Optional Tests |  asflicense  compile  javac  

[jira] [Updated] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-16 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10236:

Target Version/s:   (was: )
  Status: Patch Available  (was: Open)

> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10236-00.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10414) allow disabling trash on per-directory basis

2016-05-16 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HDFS-10414:
---

 Summary: allow disabling trash on per-directory basis
 Key: HDFS-10414
 URL: https://issues.apache.org/jira/browse/HDFS-10414
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Sergey Shelukhin


For ETL, it might be useful to disable trash for certain directories only, to 
avoid the overhead, while keeping it enabled for the rest of the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10208) Addendum for HDFS-9579: to handle the case when client machine can't resolve network path

2016-05-16 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285875#comment-15285875
 ] 

Ming Ma commented on HDFS-10208:


Thank you [~sjlee0] and [~brahmareddy]!

> Addendum for HDFS-9579: to handle the case when client machine can't resolve 
> network path
> -
>
> Key: HDFS-10208
> URL: https://issues.apache.org/jira/browse/HDFS-10208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10208-2.patch, HDFS-10208-3.patch, 
> HDFS-10208-4.patch, HDFS-10208-5.patch, HDFS-10208.patch
>
>
> If DFSClient runs on a machine that can't resolve the network path, 
> {{DNSToSwitchMapping}} will return {{DEFAULT_RACK}}. In addition, if somehow 
> {{dnsToSwitchMapping.resolve}} returns null, that will cause an exception 
> when it tries to create {{clientNode}}. In either case, there is no need to 
> create {{clientNode}}, and we should treat its network distance to any 
> datanode as Integer.MAX_VALUE.
> {noformat}
> clientNode = new NodeBase(clientHostName,
> dnsToSwitchMapping.resolve(nodes).get(0));
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10208) Addendum for HDFS-9579: to handle the case when client machine can't resolve network path

2016-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15285882#comment-15285882
 ] 

Hudson commented on HDFS-10208:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9773 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9773/])
HDFS-10208. Addendum for HDFS-9579: to handle the case when client (sjlee: rev 
61f46be071e42f9eb49a54b1bd2e54feac59f808)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/net/TestNetworkTopology.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ClientContext.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NodeBase.java


> Addendum for HDFS-9579: to handle the case when client machine can't resolve 
> network path
> -
>
> Key: HDFS-10208
> URL: https://issues.apache.org/jira/browse/HDFS-10208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10208-2.patch, HDFS-10208-3.patch, 
> HDFS-10208-4.patch, HDFS-10208-5.patch, HDFS-10208.patch
>
>
> If DFSClient runs on a machine that can't resolve the network path, 
> {{DNSToSwitchMapping}} will return {{DEFAULT_RACK}}. In addition, if somehow 
> {{dnsToSwitchMapping.resolve}} returns null, that will cause an exception 
> when it tries to create {{clientNode}}. In either case, there is no need to 
> create {{clientNode}}, and we should treat its network distance to any 
> datanode as Integer.MAX_VALUE.
> {noformat}
> clientNode = new NodeBase(clientHostName,
> dnsToSwitchMapping.resolve(nodes).get(0));
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10363:
-
Attachment: HDFS-10363.02.patch

The v02 patch updates links in a couple of exception messages to the Apache 
Hadoop wiki.

It also removes unnecessary reverse DNS lookups during SCM initialization that 
can add a minute to startup time.

> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10363.01.patch, HDFS-10363.02.patch, 
> OzoneScmEndpointconfiguration.pdf, ozone-site.xml
>
>
> The SCM should have its own config keys to specify service addresses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10416) Empty exception msg in the checking of superuser privilege in DataNode

2016-05-16 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HDFS-10416:
--
Attachment: (was: HDFS-10416.000.patch)

> Empty exception msg in the checking of superuser privilege in DataNode 
> 
>
> Key: HDFS-10416
> URL: https://issues.apache.org/jira/browse/HDFS-10416
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
> Attachments: HDFS-10416.000.patch
>
>
> In {{checkSuperuserPrivilege}} ({{DataNode.java}}), when the check fails, it 
> throws an empty {{AccessControlException}} object, which makes it hard for 
> users to understand precisely what caused the "permission denied" error.
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private void checkSuperuserPrivilege() ... {
>   ...
>   // Not a superuser.
>   throw new AccessControlException();
> }
> {code}
> (the method is used in a number of DataNode operations like 
> {{refreshNamenodes}}, {{deleteBlockPool}}, {{shutdownDatanode}}, just listing 
> a few).
> \\
> By comparison, look at the *exact same method* implemented for 
> {{NameNode}}:
> {code:title=org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker|borderStyle=solid}
> public void checkSuperuserPrivilege() ... {
>   if (!isSuperUser()) {
> throw new AccessControlException("Access denied for user "
> + getUser() + ". Superuser privilege is required");
>   }
> }
> {code}
> The message is much clearer and easier to understand.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10416) Empty exception msg in the checking of superuser privilege in DataNode

2016-05-16 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HDFS-10416:
--
Attachment: HDFS-10416.000.patch

> Empty exception msg in the checking of superuser privilege in DataNode 
> 
>
> Key: HDFS-10416
> URL: https://issues.apache.org/jira/browse/HDFS-10416
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
> Attachments: HDFS-10416.000.patch
>
>
> In {{checkSuperuserPrivilege}} ({{DataNode.java}}), when the check fails, it 
> throws an empty {{AccessControlException}} object, which makes it hard for 
> users to understand precisely what caused the "permission denied" error.
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private void checkSuperuserPrivilege() ... {
>   ...
>   // Not a superuser.
>   throw new AccessControlException();
> }
> {code}
> (the method is used in a number of DataNode operations like 
> {{refreshNamenodes}}, {{deleteBlockPool}}, {{shutdownDatanode}}, just listing 
> a few).
> \\
> By comparison, look at the *exact same method* implemented for 
> {{NameNode}}:
> {code:title=org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker|borderStyle=solid}
> public void checkSuperuserPrivilege() ... {
>   if (!isSuperUser()) {
> throw new AccessControlException("Access denied for user "
> + getUser() + ". Superuser privilege is required");
>   }
> }
> {code}
> The message is much clearer and easier to understand.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10417) Actionable msgs for checkBlockLocalPathAccess

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286086#comment-15286086
 ] 

Hadoop QA commented on HDFS-10417:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 58s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 42s {color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 108m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | hadoop.hdfs.TestAsyncDFSRename |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804331/HDFS-10417.000.patch |
| JIRA Issue | HDFS-10417 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b04494d73e26 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2c91fd8 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15455/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HDFS-Build/15455/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15455/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15455/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Actionable msgs for checkBlockLocalPathAccess
> -
>
> 

[jira] [Commented] (HDFS-2173) saveNamespace should not throw IOE when only one storage directory fails to write VERSION file

2016-05-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284268#comment-15284268
 ] 

Hadoop QA commented on HDFS-2173:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 27s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 3 new + 
62 unchanged - 0 fixed = 65 total (was 62) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 0s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 30s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 171m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
|
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804111/HDFS-2173.02.patch |
| JIRA Issue | HDFS-2173 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284196#comment-15284196
 ] 

Kai Zheng commented on HDFS-8449:
-

+1 on the latest patch. Will commit it shortly.

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch, 
> HDFS-8449-008.patch, HDFS-8449-009.patch, HDFS-8449-010.patch, 
> HDFS-8449-v10.patch, HDFS-8449-v11.patch, HDFS-8449-v12.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total, failed, and successful tasks.
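> A minimal sketch of such counters using the metrics2 annotations (names are 
> illustrative, not necessarily those in the committed patch):
> {code}
> import org.apache.hadoop.metrics2.annotation.Metric;
> import org.apache.hadoop.metrics2.annotation.Metrics;
> import org.apache.hadoop.metrics2.lib.MutableCounterLong;
>
> @Metrics(about = "DataNode ECWorker task metrics", context = "dfs")
> class ECWorkerMetrics {
>   // Incremented once per reconstruction task the DataNode picks up;
>   // would be registered with the DataNode's metrics system at startup.
>   @Metric("Total EC reconstruction tasks") MutableCounterLong ecTasks;
>   // Incremented when a task throws or is aborted.
>   @Metric("Failed EC reconstruction tasks") MutableCounterLong ecFailedTasks;
> }
> {code}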



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284210#comment-15284210
 ] 

Kai Zheng commented on HDFS-8449:
-

Oops, I have committed it but found I can't resolve the issue because the 
required operation is disabled for me. Would anyone help resolve it? Thanks!

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch, 
> HDFS-8449-008.patch, HDFS-8449-009.patch, HDFS-8449-010.patch, 
> HDFS-8449-v10.patch, HDFS-8449-v11.patch, HDFS-8449-v12.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total, failed, and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10407) Erasure Coding: Rename CorruptReplicasMap to CorruptRedundancyMap in BlockManager to more generic

2016-05-16 Thread Rakesh R (JIRA)
Rakesh R created HDFS-10407:
---

 Summary: Erasure Coding: Rename CorruptReplicasMap to 
CorruptRedundancyMap in BlockManager to more generic
 Key: HDFS-10407
 URL: https://issues.apache.org/jira/browse/HDFS-10407
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Rakesh R
Assignee: Rakesh R


The idea of this jira is to rename the following entity in BlockManager:

- {{CorruptReplicasMap}} to {{CorruptRedundancyMap}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data

2016-05-16 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284207#comment-15284207
 ] 

Rakesh R commented on HDFS-9833:


The attached patch addresses only one target datanode failure at a time and 
reconstructs it. That is, while iterating over the block group, if it finds a 
missing index or hits an exception, it reconstructs the data for that index and 
recalculates the block checksum for that block. How about optimizing the 
checksum recomputation logic to handle multiple datanode failures and 
reconstruct them together in another sub-task?
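
A hedged sketch of the single-failure flow just described; every helper here is a hypothetical stand-in for the patch's internals, not its actual API:

{code:java}
import java.io.IOException;

// Sketch only: recompute a block-group checksum, erasure-decoding at most one
// missing/corrupt index locally instead of writing it back to a datanode.
abstract class GroupChecksumSketch {
  abstract byte[] requestBlockChecksum(Object block) throws IOException;
  abstract byte[] decodeBlock(Object[] group, int idx); // erasure-decode one index
  abstract byte[] checksumOf(byte[] blockData);         // checksum over decoded bytes
  abstract void combine(byte[] blockChecksum);          // fold into the file checksum

  void checksumGroup(Object[] blockGroup) {
    for (int i = 0; i < blockGroup.length; i++) {
      byte[] cs;
      try {
        cs = requestBlockChecksum(blockGroup[i]); // normal path: the DN computes it
      } catch (IOException missingOrCorrupt) {
        // Failure path: reconstruct just this index and checksum it locally.
        cs = checksumOf(decodeBlock(blockGroup, i));
      }
      combine(cs);
    }
  }
}
{code}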

> Erasure coding: recomputing block checksum on the fly by reconstructing the 
> missed/corrupt block data
> -
>
> Key: HDFS-9833
> URL: https://issues.apache.org/jira/browse/HDFS-9833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-9833-00-draft.patch
>
>
> As discussed in HDFS-8430 and HDFS-9694, to compute a striped file checksum 
> even when some of the striped blocks are missing, we need to consider 
> recomputing the block checksum on the fly for the missed/corrupt blocks. To 
> recompute the block checksum, the block data needs to be reconstructed by 
> erasure decoding, and the main code needed for the block reconstruction could 
> be borrowed from HDFS-9719, the refactoring of the existing 
> {{ErasureCodingWorker}}. In the EC worker, reconstructed blocks need to be 
> written out to target datanodes, but in this case the remote writing isn't 
> necessary, as the reconstructed block data is only used to recompute the 
> checksum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284209#comment-15284209
 ] 

Hudson commented on HDFS-8449:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9765 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9765/])
HDFS-8449. Add tasks count metrics to datanode for ECWorker. Contributed by Li Bo. 
(kai.zheng: rev ad9441122f31547fcab29f50e64d52a8895906b6)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReconstructStripedFile.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReconstructor.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/StripedFileTestUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeErasureCodingMetrics.java


> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch, 
> HDFS-8449-008.patch, HDFS-8449-009.patch, HDFS-8449-010.patch, 
> HDFS-8449-v10.patch, HDFS-8449-v11.patch, HDFS-8449-v12.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total, failed, and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-16 Thread Li Bo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Bo updated HDFS-8449:

  Resolution: Fixed
Target Version/s:   (was: )
  Status: Resolved  (was: Patch Available)

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch, 
> HDFS-8449-008.patch, HDFS-8449-009.patch, HDFS-8449-010.patch, 
> HDFS-8449-v10.patch, HDFS-8449-v11.patch, HDFS-8449-v12.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total, failed, and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10236) Erasure Coding: Rename replication-based names in BlockManager to more generic [part-3]

2016-05-16 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-10236:

Attachment: HDFS-10236-00.patch

Uploaded patch. [~zhz], kindly review the changes. Thanks!

> Erasure Coding: Rename replication-based names in BlockManager to more 
> generic [part-3]
> ---
>
> Key: HDFS-10236
> URL: https://issues.apache.org/jira/browse/HDFS-10236
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-10236-00.patch
>
>
> The idea of this jira is to rename the following entity in BlockManager:
> {{getExpectedReplicaNum}} to {{getExpectedRedundancyNum}}
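
For readers of the digest, a sketch of the shape of the rename; the simplified signature below is an assumption, not BlockManager's real one (which operates on BlockInfo and consults the replication or erasure-coding policy):

{code:java}
// Sketch only: the old replication-centric name delegating to the new,
// redundancy-centric one that covers both replicas and EC internal blocks.
class BlockManagerRenameSketch {
  /** Old name, kept here only to illustrate the transition. */
  @Deprecated
  short getExpectedReplicaNum(Object block) {
    return getExpectedRedundancyNum(block);
  }

  /** New name, generic over replication and erasure coding. */
  short getExpectedRedundancyNum(Object block) {
    return 3; // placeholder; the real value comes from the block's policy
  }
}
{code}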



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10408) Add tests for out-of-order asynchronous rename/setPermission/setOwner

2016-05-16 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-10408:


 Summary: Add tests for out-of-order asynchronous 
rename/setPermission/setOwner
 Key: HDFS-10408
 URL: https://issues.apache.org/jira/browse/HDFS-10408
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaobing Zhou


HDFS-10224 and HDFS-10346 mostly test the batch-style async request/response. 
The out-of-order case should also be tested.
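
A minimal sketch of the out-of-order case, assuming only that the async calls return {{Future}}s; {{asyncRename}} below is a hypothetical stand-in, not the real API:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OutOfOrderAsyncSketch {
  // Hypothetical stand-in for an async rename that returns Future<Void>.
  static Future<Void> asyncRename(ExecutorService pool, String src, String dst) {
    return pool.submit(() -> null);
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<Void>> futures = new ArrayList<>();
    for (int i = 0; i < 20; i++) {
      futures.add(asyncRename(pool, "/src" + i, "/dst" + i));
    }
    // Consume responses in reverse issue order: this is the out-of-order
    // request/response pattern the new tests should exercise.
    for (int i = futures.size() - 1; i >= 0; i--) {
      futures.get(i).get();
    }
    pool.shutdown();
  }
}
{code}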



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data

2016-05-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284221#comment-15284221
 ] 

Kai Zheng commented on HDFS-9833:
-

Thanks [~rakeshr] for the big piece of work; I will take a look and give my feedback.

bq. How about optimizing the checksum recomputation logic to handle multiple 
datanode failures and reconstruct them together in another sub-task?
Sounds like a good plan. Handling multiple block failures shouldn't have a big 
impact on the existing code that works for a single block failure. This is 
similar to the ECWorker.

> Erasure coding: recomputing block checksum on the fly by reconstructing the 
> missed/corrupt block data
> -
>
> Key: HDFS-9833
> URL: https://issues.apache.org/jira/browse/HDFS-9833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-9833-00-draft.patch
>
>
> As discussed in HDFS-8430 and HDFS-9694, to compute a striped file checksum 
> even when some of the striped blocks are missing, we need to consider 
> recomputing the block checksum on the fly for the missed/corrupt blocks. To 
> recompute the block checksum, the block data needs to be reconstructed by 
> erasure decoding, and the main code needed for the block reconstruction could 
> be borrowed from HDFS-9719, the refactoring of the existing 
> {{ErasureCodingWorker}}. In the EC worker, reconstructed blocks need to be 
> written out to target datanodes, but in this case the remote writing isn't 
> necessary, as the reconstructed block data is only used to recompute the 
> checksum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8449) Add tasks count metrics to datanode for ECWorker

2016-05-16 Thread Li Bo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15284223#comment-15284223
 ] 

Li Bo commented on HDFS-8449:
-

Thanks, Kai, for the review and commit. I have just resolved this jira.

> Add tasks count metrics to datanode for ECWorker
> 
>
> Key: HDFS-8449
> URL: https://issues.apache.org/jira/browse/HDFS-8449
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Li Bo
>Assignee: Li Bo
> Attachments: HDFS-8449-000.patch, HDFS-8449-001.patch, 
> HDFS-8449-002.patch, HDFS-8449-003.patch, HDFS-8449-004.patch, 
> HDFS-8449-005.patch, HDFS-8449-006.patch, HDFS-8449-007.patch, 
> HDFS-8449-008.patch, HDFS-8449-009.patch, HDFS-8449-010.patch, 
> HDFS-8449-v10.patch, HDFS-8449-v11.patch, HDFS-8449-v12.patch
>
>
> This sub-task tries to record the EC recovery tasks that a datanode has done, 
> including total, failed, and successful tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10276) HDFS throws AccessControlException when checking for the existence of /a/b when /a is a file

2016-05-16 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10276:
--
Attachment: HDFS-10276.005.patch

> HDFS throws AccessControlException when checking for the existence of /a/b 
> when /a is a file
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch
>
>
> Given a file {{/file}}, an existence check for the path {{/file/whatever}} 
> will give different responses for different implementations of FileSystem.
> LocalFileSystem will return false, while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}
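
A minimal reproduction sketch (run against a MiniDFSCluster or any configured FileSystem; the path below is arbitrary):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExistsUnderFileRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/tmp/hdfs-10276-file");
    fs.create(file).close(); // make the parent component a regular file
    // LocalFileSystem returns false here; DistributedFileSystem (before the
    // patch) can instead throw AccessControlException (access=EXECUTE).
    System.out.println(fs.exists(new Path(file, "whatever")));
  }
}
{code}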



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


