[jira] [Commented] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-27 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15397133#comment-15397133
 ] 

Vinayakumar B commented on HDFS-10625:
--

bq. One comment from me: can we log the replica info when we catch the IOException? That way we can see the replica info and still keep the original IOException.
I see that all current callers of {{sendBlock()}} already print the exception 
message with the blockId further down the line, so IMO we can avoid multiple 
redundant logs for the same exception. There are already similar cases that 
flood the logs by logging the same error at multiple levels; for example, any 
exception during a write gets logged in several places.
Better to avoid redundant logging.


>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.003.patch, 
> HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad and, when the 
> block is corrupt, where the first corrupted chunk is located in the block.






[jira] [Updated] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-27 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-10645:
--
Attachment: HDFS-10645.005.patch

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, HDFS-10645.002.patch, 
> HDFS-10645.003.patch, HDFS-10645.004.patch, HDFS-10645.005.patch, 
> Selection_047.png, Selection_048.png
>
>
> Record the block report size as a metric and show it on the datanode UI. It's 
> important for administrators to be able to identify block report bottlenecks, 
> and it is also a useful tuning metric.






[jira] [Updated] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10625:
-
Attachment: HDFS-10625.003.patch

Thanks a lot for the comment, [~vinayrpet].
{quote}
One problem here is that the places which expect a specific exception, such as 
ChecksumException or FileNotFoundException, would instead get an IOException 
with the cause set to the ChecksumException or FNFE.
So it's better not to change this; let the original IOException be thrown back. 
In any case, the DN logs will still capture the replica details.
{quote}
One comment from me: can we log the replica info when we catch the IOException? 
That way we can see the replica info and still keep the original IOException.

Other suggestions look good to me. Posted a new patch addressing the latest 
comments; thanks for the review.
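
A minimal sketch of the suggested pattern (hypothetical names, not the actual 
BlockSender code): log the replica details at the catch site, then rethrow the 
original exception unchanged so callers still see the exact type they expect.
{code}
try {
  sendChunks(block);                       // hypothetical send call
} catch (IOException ioe) {
  // Surface the replica context in the DN log, but keep the original
  // exception (and its concrete type, e.g. ChecksumException) intact.
  LOG.warn("Failed to send block, replica=" + replica, ioe);
  throw ioe;
}
{code}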


>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.003.patch, 
> HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad and, when the 
> block is corrupt, where the first corrupted chunk is located in the block.






[jira] [Commented] (HDFS-10645) Make block report size as a metric and add this metric to datanode web ui

2016-07-27 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15397033#comment-15397033
 ] 

Yuanbo Liu commented on HDFS-10645:
---

[~ajisakaa] Thanks for your comments.
I changed a bit of code because of HDFS-10301, but I don't think it affects the 
block report size calculation.
Uploaded v5 for review. Thank you again for your suggestions; they were very helpful!

> Make block report size as a metric and add this metric to datanode web ui
> -
>
> Key: HDFS-10645
> URL: https://issues.apache.org/jira/browse/HDFS-10645
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, ui
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-10645.001.patch, HDFS-10645.002.patch, 
> HDFS-10645.003.patch, HDFS-10645.004.patch, Selection_047.png, 
> Selection_048.png
>
>
> Record the block report size as a metric and show it on the datanode UI. It's 
> important for administrators to be able to identify block report bottlenecks, 
> and it is also a useful tuning metric.






[jira] [Commented] (HDFS-10691) FileDistribution fails in hdfs oiv command due to ArrayIndexOutOfBoundsException

2016-07-27 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396998#comment-15396998
 ] 

Akira Ajisaka commented on HDFS-10691:
--

Thanks [~linyiqun] for reporting this and providing the patches.
To avoid the exception, I think the following code is clearer. What do you 
think?
{code}
int bucket = (int) Math.ceil((double) fileSize / steps);
if (bucket >= distribution.length) {
  bucket = distribution.length - 1;
}
{code}
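
As a sanity check, here is a small self-contained sketch (the variable names 
and sample file size come from the description below; this is not the actual 
FileDistributionCalculator code) showing that the clamp keeps the bucket in 
range where the old expression overflowed:
{code}
public class BucketDemo {
  public static void main(String[] args) {
    long maxSize = 104857600, steps = 1024000, fileSize = 104800000;
    int numIntervals = (int) (maxSize / steps);              // 102
    int length = numIntervals + 1;                           // distribution.length == 103
    int bucket = (int) Math.ceil((double) fileSize / steps); // 103 -> out of range
    if (bucket >= length) {
      bucket = length - 1;                                   // clamped to the last bucket
    }
    System.out.println(bucket);                              // prints 102
  }
}
{code}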

> FileDistribution fails in hdfs oiv command due to 
> ArrayIndexOutOfBoundsException
> 
>
> Key: HDFS-10691
> URL: https://issues.apache.org/jira/browse/HDFS-10691
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10691.001.patch, HDFS-10691.002.patch
>
>
> I used the hdfs oiv -p FileDistribution command to do a file analysis, but an 
> {{ArrayIndexOutOfBoundsException}} occurred and terminated the process. The 
> stack trace:
> {code}
> Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 103
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.run(FileDistributionCalculator.java:243)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.FileDistributionCalculator.visit(FileDistributionCalculator.java:176)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(OfflineImageViewerPB.java:176)
>   at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.main(OfflineImageViewerPB.java:129)
> {code}
> I looked into the code and found that the exception is thrown while 
> incrementing a count in {{distribution}}. The cause is that the bucket number 
> exceeded the distribution's length.
> Here are my steps:
> 1) The input command params:
> {code}
> hdfs oiv -p FileDistribution -maxSize 104857600 -step 1024000
> {code}
> The {{numIntervals}} in the code is 104857600/1024000 = 102 (real value: 
> 102.4), so the {{distribution}}'s length is {{numIntervals}} + 1 = 103.
> 2) The {{ArrayIndexOutOfBoundsException}} happens when the file size falls in 
> the range ((maxSize/step)*step, maxSize]. For example, take a file of size 
> 104800000, which lies in that range. The bucket number is calculated as 
> 104800000/1024000 = 102.3, and the code takes {{Math.ceil}} of this, so the 
> final value is 103. But the {{distribution}}'s length is also 103, meaning 
> valid indices run from 0 to 102, so the {{ArrayIndexOutOfBoundsException}} 
> occurs.
> In short, the exception happens when {{maxSize}} is not evenly divisible by 
> {{step}} and the file size falls in the range ((maxSize/step)*step, maxSize]. 
> The related logic should be changed from
> {code}
> int bucket = fileSize > maxSize ? distribution.length - 1 :
>     (int) Math.ceil((double) fileSize / steps);
> {code}
> to
> {code}
> int bucket =
>     fileSize >= maxSize || fileSize > (maxSize / steps) * steps ?
>         distribution.length - 1 : (int) Math.ceil((double) fileSize / steps);
> {code}






[jira] [Commented] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-27 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396985#comment-15396985
 ] 

Vinayakumar B commented on HDFS-10625:
--

bq. We can add a catch block here to catch the IOException thrown, then include 
the replica information and throw a new IO exception, e.g:
One problem here is that the places which expect a specific exception, such as 
{{ChecksumException}} or {{FileNotFoundException}}, would instead get an 
IOException with the cause set to the ChecksumException or FNFE.
So it's better not to change this; let the original IOException be thrown back. 
In any case, the DN logs will still capture the replica details.
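
A tiny self-contained sketch of why the wrapping breaks such callers 
(hypothetical names, not the actual BlockSender code): once the specific 
exception is wrapped in a plain IOException, a catch block for 
FileNotFoundException no longer matches.
{code}
import java.io.FileNotFoundException;
import java.io.IOException;

public class WrapDemo {
  static void open(boolean wrap) throws IOException {
    try {
      throw new FileNotFoundException("replica file missing");
    } catch (FileNotFoundException fnfe) {
      if (wrap) {
        // Wrapped: the concrete type is lost; callers see only IOException.
        throw new IOException("replica=blk_123", fnfe);
      }
      throw fnfe; // Preserved: the original type is rethrown unchanged.
    }
  }

  public static void main(String[] args) {
    for (boolean wrap : new boolean[] {false, true}) {
      try {
        open(wrap);
      } catch (FileNotFoundException fnfe) {
        System.out.println("caught specific FNFE (wrap=" + wrap + ")");
      } catch (IOException ioe) {
        System.out.println("caught generic IOException (wrap=" + wrap + ")");
      }
    }
  }
}
{code}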

bq. Looks like we can make this replica a member of BlockSender instead of a 
local variable here, so that we can refer to it when needed, such as for this 
jira. We probably should make replicaVisibleLength a member and report it as 
part of the replica info too, since when the writing is going on, this value 
may be changing concurrently.
Making ReplicaInfo a member is good, but making {{replicaVisibleLength}} a 
member may not be required: {{endOffset}} is already present and determines 
how much BlockSender intended to read, so {{endOffset}} can be used whenever 
needed.
Coming to checksum verification, BlockSender verifies checksums only for 
finalized blocks via the VolumeScanner, not while serving reads (in the read 
case, verification happens at the client). So we can expect the replica to be 
finalized in this case, with no change in the visible length.

So I feel the changes required for the latest patch are: combining HDFS-10626, 
making replicaInfo a member, and using it to construct the ChecksumException 
message.
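
A rough sketch of that direction (field and method names are hypothetical, not 
the actual patch): the replica becomes a BlockSender member so the checksum 
failure message can carry its details, while the thrown exception type stays 
the same.
{code}
import java.io.IOException;

class BlockSenderSketch {
  private final Object replicaInfo; // kept as a member instead of a local
  private final long endOffset;     // already bounds how much will be read

  BlockSenderSketch(Object replicaInfo, long endOffset) {
    this.replicaInfo = replicaInfo;
    this.endOffset = endOffset;
  }

  void onChecksumFailure(long offset) throws IOException {
    // Same exception type as before (ChecksumException in Hadoop); only the
    // message gains the replica context.
    throw new IOException("Checksum failed at offset " + offset
        + " (endOffset=" + endOffset + "), replica=" + replicaInfo);
  }
}
{code}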

>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad and, when the 
> block is corrupt, where the first corrupted chunk is located in the block.






[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396986#comment-15396986
 ] 

Hudson commented on HDFS-10519:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10166 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10166/])
HDFS-10519. Add a configuration option to enable in-progress edit log (wang: 
rev 098ec2b11ff3f677eb823f75b147a1ac8dbf959e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestGenericJournalConf.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorageRetentionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FileJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/LogsPurgeable.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/src/main/java/org/apache/hadoop/contrib/bkjournal/BookKeeperJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/JournalSet.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestFailureToReadEdits.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/protocol/RemoteEditLogManifest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/BackupJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFileJournalManager.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/protocolPB/TestPBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestStandbyInProgressTail.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNStorageRetentionManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournal.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/HdfsServer.proto


> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch, 
> HDFS-10519.003.patch, HDFS-10519.004.patch, HDFS-10519.005.patch, 
> HDFS-10519.006.patch, HDFS-10519.007.patch, HDFS-10519.008.patch, 
> HDFS-10519.009.patch, HDFS-10519.010.patch, HDFS-10519.011.patch, 
> HDFS-10519.012.patch, HDFS-10519.013.patch
>
>
> The Standby NameNode has the option to tail in-progress edit logs to improve 
> data freshness. In-progress tailing is already implemented, but it is not 
> enabled by default, and there is no configuration key to turn it on.
> Adding such a configuration key to enable it on the Standby NameNode is 
> reasonable and would be a basis for further improvements to the Standby 
> NameNode.






[jira] [Commented] (HDFS-6962) ACLs inheritance conflict with umaskmode

2016-07-27 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396977#comment-15396977
 ] 

John Zhuge commented on HDFS-6962:
--

[~cnauroth] and [~eddyxu], have you had a chance to look at 009? I believe all 
major review issues are resolved.

I plan to run all Hadoop unit tests twice: once with the flag off and once with 
the flag on.

> ACLs inheritance conflict with umaskmode
> 
>
> Key: HDFS-6962
> URL: https://issues.apache.org/jira/browse/HDFS-6962
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
> Environment: CentOS release 6.5 (Final)
>Reporter: LINTE
>Assignee: John Zhuge
>Priority: Critical
>  Labels: hadoop, security
> Attachments: HDFS-6962.001.patch, HDFS-6962.002.patch, 
> HDFS-6962.003.patch, HDFS-6962.004.patch, HDFS-6962.005.patch, 
> HDFS-6962.006.patch, HDFS-6962.007.patch, HDFS-6962.008.patch, 
> HDFS-6962.009.patch, HDFS-6962.1.patch, disabled_new_client.log, 
> disabled_old_client.log, enabled_new_client.log, enabled_old_client.log, run
>
>
> In hdfs-site.xml:
> <property>
>   <name>dfs.umaskmode</name>
>   <value>027</value>
> </property>
> 1/ Create a directory as superuser
> bash# hdfs dfs -mkdir /tmp/ACLS
> 2/ Set default ACLs on this directory: rwx access for group readwrite and 
> user toto
> bash# hdfs dfs -setfacl -m default:group:readwrite:rwx /tmp/ACLS
> bash# hdfs dfs -setfacl -m default:user:toto:rwx /tmp/ACLS
> 3/ Check ACLs on /tmp/ACLS/
> bash# hdfs dfs -getfacl /tmp/ACLS/
> # file: /tmp/ACLS
> # owner: hdfs
> # group: hadoop
> user::rwx
> group::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> user::rwx | group::r-x | other::--- matches the umaskmode defined in 
> hdfs-site.xml, everything OK!
> default:group:readwrite:rwx allows the readwrite group rwx access for 
> inheritance.
> default:user:toto:rwx allows the toto user rwx access for inheritance.
> default:mask::rwx means the inheritance mask is rwx, so no masking
> 4/ Create a subdir to test inheritance of ACL
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs
> 5/ Check ACLs on /tmp/ACLS/hdfs
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs
> # file: /tmp/ACLS/hdfs
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:r-x
> group::r-x
> group:readwrite:rwx #effective:r-x
> mask::r-x
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> Here we can see that the readwrite group has an rwx ACL but only r-x is 
> effective, because the mask is r-x (mask::r-x) even though the default mask 
> for inheritance is set to default:mask::rwx on /tmp/ACLS/
> 6/ Modify hdfs-site.xml and restart the namenode
> <property>
>   <name>dfs.umaskmode</name>
>   <value>010</value>
> </property>
> 7/ Create a subdir to test inheritance of ACL with new parameter umaskmode
> bash# hdfs dfs -mkdir  /tmp/ACLS/hdfs2
> 8/ Check ACL on /tmp/ACLS/hdfs2
> bash# hdfs dfs -getfacl /tmp/ACLS/hdfs2
> # file: /tmp/ACLS/hdfs2
> # owner: hdfs
> # group: hadoop
> user::rwx
> user:toto:rwx   #effective:rw-
> group::r-x  #effective:r--
> group:readwrite:rwx #effective:rw-
> mask::rw-
> other::---
> default:user::rwx
> default:user:toto:rwx
> default:group::r-x
> default:group:readwrite:rwx
> default:mask::rwx
> default:other::---
> So HDFS masks the ACL values (user, group and other, except the POSIX owner) 
> with the group part of the dfs.umaskmode property when creating a directory 
> with inherited ACLs.
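> A quick self-contained check of that masking arithmetic (a sketch assuming 
> the effective mask is the inherited default mask AND-ed with the complement 
> of the umask's group digit):
> {code}
> public class UmaskMaskDemo {
>   static int effectiveMask(int umask, int inheritedMask) {
>     int groupPart = (umask >> 3) & 07;      // group digit of the umask
>     return inheritedMask & ~groupPart & 07; // bits the umask leaves enabled
>   }
> 
>   public static void main(String[] args) {
>     // umask 027: group digit 2 (w) -> default:mask::rwx becomes r-x (5)
>     System.out.println(Integer.toOctalString(effectiveMask(0027, 07)));
>     // umask 010: group digit 1 (x) -> default:mask::rwx becomes rw- (6)
>     System.out.println(Integer.toOctalString(effectiveMask(0010, 07)));
>   }
> }
> {code}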






[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396976#comment-15396976
 ] 

Hadoop QA commented on HDFS-4176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-4176 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820581/HDFS-4176.03.patch |
| JIRA Issue | HDFS-4176 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16227/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.
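> A generic sketch of how such a call can be bounded (illustrative only; the 
> frozen call and the timeout value are stand-ins, not the actual patch): run 
> the blocking RPC on an executor and give up after a deadline.
> {code}
> import java.util.concurrent.*;
> 
> public class BoundedRpcDemo {
>   public static void main(String[] args) throws Exception {
>     ExecutorService executor = Executors.newSingleThreadExecutor();
>     Future<Void> roll = executor.submit(() -> {
>       Thread.sleep(Long.MAX_VALUE); // stands in for a frozen rollEditLog() RPC
>       return null;
>     });
>     try {
>       roll.get(1, TimeUnit.SECONDS); // bound the wait instead of blocking forever
>     } catch (TimeoutException te) {
>       roll.cancel(true);             // interrupt the stuck call and move on
>       System.out.println("roll timed out; standby can keep making progress");
>     } finally {
>       executor.shutdownNow();
>     }
>   }
> }
> {code}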






[jira] [Updated] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4176:

Hadoop Flags: Reviewed

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.






[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396935#comment-15396935
 ] 

Jing Zhao commented on HDFS-4176:
-

The TestHDFSCLI failure is tracked by HDFS-10696; the other two tests passed on 
my local machine, so the failures should be unrelated. +1 on the latest patch. 
Thanks for the contribution, [~eddyxu]!

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.






[jira] [Created] (HDFS-10700) I increased the value of GC_OPTS on the namenode. After I modified the value, the namenode failed to start.

2016-07-27 Thread Liu Guannan (JIRA)
Liu Guannan created HDFS-10700:
--

 Summary: I increased the value of GC_OPTS on the namenode. After I 
modified the value, the namenode failed to start.
 Key: HDFS-10700
 URL: https://issues.apache.org/jira/browse/HDFS-10700
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.2
 Environment: Linux Suse 11 SP3
Reporter: Liu Guannan


I increased the value of GC_OPTS on the namenode. After I modified the value, 
the namenode failed to start. The reason is that the datanodes reported block 
status to the namenode, which caused the namenode to update block status 
slowly, and the namenode startup then failed.






[jira] [Commented] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396832#comment-15396832
 ] 

Hadoop QA commented on HDFS-10689:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 14m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} root: The patch generated 0 new + 219 unchanged - 2 
fixed = 219 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 10s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820608/HDFS-10689.003.patch |
| JIRA Issue | HDFS-10689 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 56881d56a349 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b43de80 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16224/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16224/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16224/testReport

[jira] [Commented] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-27 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396816#comment-15396816
 ] 

Manoj Govindassamy commented on HDFS-10689:
---

Unit test failures are unrelated to this patch. 

> Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in 
> the octal mode
> ---
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch
>
>
> When a directory's permission is modified using the hdfs dfs chmod command 
> with the octal/numeric format, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir: hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: the sticky bit is removed from the dir, as happens on Mac/Linux 
> native filesystems with native chmod.
> 4. However, removing the sticky bit by explicitly turning off the bit does 
> work: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}
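> A small sketch of the expected semantics (plain bit arithmetic, not the 
> actual ChmodParser code): an absolute octal mode should replace all 
> permission bits, so a 3-digit 755 clears the sticky bit just as an explicit 
> 4-digit 0755 does.
> {code}
> public class StickyBitDemo {
>   // Apply an absolute octal mode: every permission bit, including the sticky
>   // bit (01000), comes from the new mode string; the current mode is ignored.
>   static int applyOctal(String octal, int currentMode) {
>     return Integer.parseInt(octal, 8) & 07777;
>   }
> 
>   public static void main(String[] args) {
>     int mode = 01755;                                // drwxr-xr-t
>     mode = applyOctal("755", mode);                  // expected: sticky bit cleared
>     System.out.println(Integer.toOctalString(mode)); // prints 755, not 1755
>   }
> }
> {code}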






[jira] [Commented] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396810#comment-15396810
 ] 

Hadoop QA commented on HDFS-10689:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  9m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} root: The patch generated 0 new + 219 unchanged - 2 
fixed = 219 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
20s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820608/HDFS-10689.003.patch |
| JIRA Issue | HDFS-10689 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b424b1192cd7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b43de80 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16226/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16226/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommi

[jira] [Updated] (HDFS-10699) Log object instance get incorrectly in TestDFSAdmin

2016-07-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10699:
-
Status: Patch Available  (was: Open)

> Log object instance get incorrectly in TestDFSAdmin
> ---
>
> Key: HDFS-10699
> URL: https://issues.apache.org/jira/browse/HDFS-10699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10699.001.patch
>
>
> In the class TestDFSAdmin, an incorrect Log object instance is used. The code:
> {code}
>  public class TestDFSAdmin {
>    private static final Log LOG = LogFactory.getLog(DFSAdmin.class);
>    private Configuration conf = null;
>    private MiniDFSCluster cluster;
>    private DFSAdmin admin;
>    ...
> {code}
> Here the class name {{DFSAdmin}} should be {{TestDFSAdmin}}.
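> The expected one-line fix would presumably be:
> {code}
> // Name the test class itself so log output is attributed correctly.
> private static final Log LOG = LogFactory.getLog(TestDFSAdmin.class);
> {code}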






[jira] [Updated] (HDFS-10699) Log object instance get incorrectly in TestDFSAdmin

2016-07-27 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10699:
-
Attachment: HDFS-10699.001.patch

Attaching a simple patch to fix this.

> Log object instance get incorrectly in TestDFSAdmin
> ---
>
> Key: HDFS-10699
> URL: https://issues.apache.org/jira/browse/HDFS-10699
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-10699.001.patch
>
>
> In the class TestDFSAdmin, an incorrect Log object instance is used. The code:
> {code}
>  public class TestDFSAdmin {
>    private static final Log LOG = LogFactory.getLog(DFSAdmin.class);
>    private Configuration conf = null;
>    private MiniDFSCluster cluster;
>    private DFSAdmin admin;
>    ...
> {code}
> Here the class name {{DFSAdmin}} should be {{TestDFSAdmin}}.






[jira] [Created] (HDFS-10699) Log object instance get incorrectly in TestDFSAdmin

2016-07-27 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-10699:


 Summary: Log object instance get incorrectly in TestDFSAdmin
 Key: HDFS-10699
 URL: https://issues.apache.org/jira/browse/HDFS-10699
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yiqun Lin
Assignee: Yiqun Lin
Priority: Minor


In the class TestDFSAdmin, an incorrect Log object instance is used. The code:
{code}
 public class TestDFSAdmin {
   private static final Log LOG = LogFactory.getLog(DFSAdmin.class);
   private Configuration conf = null;
   private MiniDFSCluster cluster;
   private DFSAdmin admin;
   ...
{code}
Here the class name {{DFSAdmin}} should be {{TestDFSAdmin}}.






[jira] [Updated] (HDFS-10681) DiskBalancer: query command should report Plan file path apart from PlanID

2016-07-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10681:
--
Attachment: HDFS-10681.002.patch

Attaching v002 patch with checkstyle issues fixed.
Other unit test failures are not related to the patch.

> DiskBalancer: query command should report Plan file path apart from PlanID
> --
>
> Key: HDFS-10681
> URL: https://issues.apache.org/jira/browse/HDFS-10681
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10681.001.patch, HDFS-10681.002.patch
>
>
> The DiskBalancer query command currently reports only the planID (SHA512 
> hex). An ongoing disk balancing activity on a datanode can be cancelled 
> either by planID + datanode_address or just by pointing to the right plan 
> file. Since there could be many plan files, to avoid ambiguity it's better if 
> the query command reports the plan file path as well.
> {noformat}
> $ hdfs diskbalancer --help query 
> usage: hdfs diskbalancer -query <hostname> [options]
> Query Plan queries a given data node about the current state of disk
> balancer execution.
> --query    Queries the disk balancer status of a given datanode.
> Query command retrieves *the plan ID* and the current running state.
> {noformat}
> Sample query command output:
> {noformat}
> 16/06/20 15:42:16 INFO command.Command: Executing "query plan" command.
> Plan ID: 
> 04f41e2e1fa2d63558284be85155ea68154fb6ab435f1078c642d605d06626f176da16b321b35c99f1f6cd0cd77090c8743bb9a19190c4a01b5f8c51a515e240
>  Result: PLAN_UNDER_PROGRESS
> or
> 16/06/20 15:46:09 INFO command.Command: Executing "query plan" command.
> Plan ID: 
> 04f41e2e1fa2d63558284be85155ea68154fb6ab435f1078c642d605d06626f176da16b321b35c99f1f6cd0cd77090c8743bb9a19190c4a01b5f8c51a515e240
>  Result: PLAN_DONE
> {noformat}
> Cancel command syntax:
> {noformat}
> $ hdfs diskbalancer --help cancel
> usage: hdfs diskbalancer -cancel <planFile> | -cancel <planID> -node
> <hostname>
> Cancel command cancels a running disk balancer operation.
> --cancel    Cancels a running plan using a plan file.
> --node      Cancels a running plan using a plan ID and hostName
> Cancel command can be run via pointing to a plan file, or by reading the
> plan ID using the query command and then using planID and hostname.
> Examples of how to run this command are
> hdfs diskbalancer -cancel <planFile>
> hdfs diskbalancer -cancel <planID> -node <hostname>
> {noformat}






[jira] [Updated] (HDFS-10681) DiskBalancer: query command should report Plan file path apart from PlanID

2016-07-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10681:
--
Attachment: (was: HDFS-10681-HDFS-1312.002.patch)

> DiskBalancer: query command should report Plan file path apart from PlanID
> --
>
> Key: HDFS-10681
> URL: https://issues.apache.org/jira/browse/HDFS-10681
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10681.001.patch
>
>
> The DiskBalancer query command currently reports only the planID (SHA512 
> hex). An ongoing disk balancing activity on a datanode can be cancelled 
> either by planID + datanode_address or just by pointing to the right plan 
> file. Since there could be many plan files, to avoid ambiguity it's better if 
> the query command reports the plan file path as well.
> {noformat}
> $ hdfs diskbalancer --help query 
> usage: hdfs diskbalancer -query <hostname> [options]
> Query Plan queries a given data node about the current state of disk
> balancer execution.
> --query    Queries the disk balancer status of a given datanode.
> Query command retrieves *the plan ID* and the current running state.
> {noformat}
> Sample query command output:
> {noformat}
> 16/06/20 15:42:16 INFO command.Command: Executing "query plan" command.
> Plan ID: 
> 04f41e2e1fa2d63558284be85155ea68154fb6ab435f1078c642d605d06626f176da16b321b35c99f1f6cd0cd77090c8743bb9a19190c4a01b5f8c51a515e240
>  Result: PLAN_UNDER_PROGRESS
> or
> 16/06/20 15:46:09 INFO command.Command: Executing "query plan" command.
> Plan ID: 
> 04f41e2e1fa2d63558284be85155ea68154fb6ab435f1078c642d605d06626f176da16b321b35c99f1f6cd0cd77090c8743bb9a19190c4a01b5f8c51a515e240
>  Result: PLAN_DONE
> {noformat}
> Cancel command syntax:
> {noformat}
> $ hdfs diskbalancer --help cancel
> usage: hdfs diskbalancer -cancel <planFile> | -cancel <planID> -node
> <hostname>
> Cancel command cancels a running disk balancer operation.
> --cancel    Cancels a running plan using a plan file.
> --node      Cancels a running plan using a plan ID and hostName
> Cancel command can be run via pointing to a plan file, or by reading the
> plan ID using the query command and then using planID and hostname.
> Examples of how to run this command are
> hdfs diskbalancer -cancel <planFile>
> hdfs diskbalancer -cancel <planID> -node <hostname>
> {noformat}






[jira] [Updated] (HDFS-10681) DiskBalancer: query command should report Plan file path apart from PlanID

2016-07-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10681:
--
Attachment: (was: HDFS-10681-HDFS-1312.001.patch)

> DiskBalancer: query command should report Plan file path apart from PlanID
> --
>
> Key: HDFS-10681
> URL: https://issues.apache.org/jira/browse/HDFS-10681
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: diskbalancer
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10681.001.patch
>
>
> The DiskBalancer query command currently reports only the planID (SHA512 
> hex). An ongoing disk balancing activity on a datanode can be cancelled 
> either by planID + datanode_address or just by pointing to the right plan 
> file. Since there could be many plan files, to avoid ambiguity it's better if 
> the query command reports the plan file path as well.
> {noformat}
> $ hdfs diskbalancer --help query 
> usage: hdfs diskbalancer -query <hostname> [options]
> Query Plan queries a given data node about the current state of disk
> balancer execution.
> --query    Queries the disk balancer status of a given datanode.
> Query command retrieves *the plan ID* and the current running state.
> {noformat}
> Sample query command output:
> {noformat}
> 16/06/20 15:42:16 INFO command.Command: Executing "query plan" command.
> Plan ID: 
> 04f41e2e1fa2d63558284be85155ea68154fb6ab435f1078c642d605d06626f176da16b321b35c99f1f6cd0cd77090c8743bb9a19190c4a01b5f8c51a515e240
>  Result: PLAN_UNDER_PROGRESS
> or
> 16/06/20 15:46:09 INFO command.Command: Executing "query plan" command.
> Plan ID: 
> 04f41e2e1fa2d63558284be85155ea68154fb6ab435f1078c642d605d06626f176da16b321b35c99f1f6cd0cd77090c8743bb9a19190c4a01b5f8c51a515e240
>  Result: PLAN_DONE
> {noformat}
> Cancel command syntax:
> {noformat}
> $ hdfs diskbalancer --help cancel
> usage: hdfs diskbalancer -cancel <planFile> | -cancel <planID> -node
> <hostname>
> Cancel command cancels a running disk balancer operation.
> --cancel    Cancels a running plan using a plan file.
> --node      Cancels a running plan using a plan ID and hostName
> Cancel command can be run via pointing to a plan file, or by reading the
> plan ID using the query command and then using planID and hostname.
> Examples of how to run this command are
> hdfs diskbalancer -cancel <planFile>
> hdfs diskbalancer -cancel <planID> -node <hostname>
> {noformat}






[jira] [Updated] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-07-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10690:
-
Assignee: Fenghua Hu

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas:
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new 
> TreeMap<>();
> TreeMap uses a red-black tree for sorting. This isn't an issue with 
> traditional HDDs, but with high-performance SSD/PCIe flash the cost of 
> inserting/removing an entry becomes considerable.
> To mitigate this, we designed a new list-based structure for replica tracking.
> The list is a double-linked FIFO. FIFO order is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this, we introduce two references into the ShortCircuitReplica object:
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> This way, no lookup is needed when removing a replica from the list; we only 
> need to update its predecessor's and successor's references.
> Our tests showed up to 15-50% performance improvement when using PCIe flash 
> as the storage medium.
> The original patch is against 2.6.4; I am now porting it to Hadoop trunk and 
> will post the patch soon.
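> A minimal sketch of such an intrusive doubly-linked FIFO (hypothetical names, 
> not the actual patch): each node carries its own prev/next references, so 
> removal is O(1) with no lookup, unlike a TreeMap's O(log n) rebalancing.
> {code}
> class Replica {
>   Replica prev, next;
>   final long id;
>   Replica(long id) { this.id = id; }
> }
> 
> class FifoList {
>   private Replica head, tail;
> 
>   void append(Replica r) {          // O(1): link at the tail (FIFO order)
>     r.prev = tail;
>     r.next = null;
>     if (tail != null) { tail.next = r; } else { head = r; }
>     tail = r;
>   }
> 
>   void remove(Replica r) {          // O(1): relink neighbors, no traversal
>     if (r.prev != null) { r.prev.next = r.next; } else { head = r.next; }
>     if (r.next != null) { r.next.prev = r.prev; } else { tail = r.prev; }
>     r.prev = r.next = null;
>   }
> 
>   Replica eldest() { return head; } // eviction candidate: oldest entry first
> }
> {code}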






[jira] [Commented] (HDFS-10457) DataNode should not auto-format block pool directory if VERSION is missing

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396785#comment-15396785
 ] 

Hadoop QA commented on HDFS-10457:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806061/HDFS-10457.001.patch |
| JIRA Issue | HDFS-10457 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f5ac2cc5f371 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b43de80 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16225/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16225/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16225/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DataNode should not auto-format block pool directory if VERSION is missing
> --
>
> Key: HDFS-10457
> URL: https://issues.apache.org/jira/browse/HDFS-10457

[jira] [Commented] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-27 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396781#comment-15396781
 ] 

Yiqun Lin commented on HDFS-10625:
--

Thanks [~yzhangal] for the comment. I will post a new patch for this after 
[~vinayrpet]'s feedback.

>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad, especially 
> when the block is corrupt, where is the first corrupted chunk in the block.
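As a hedged illustration of the kind of enriched report the description asks for, the warning could carry the cause of the failure. Everything below uses hypothetical names, not the actual patch:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: attach the underlying cause (e.g. the offset of the
// first corrupt chunk from a ChecksumException) to the "bad block" warning.
class BadBlockReporter {
  private static final Logger LOG =
      LoggerFactory.getLogger(BadBlockReporter.class);

  void report(String block, String volume, Exception cause) {
    // Instead of only "Reporting bad <block> on <volume>", say why:
    LOG.warn("Reporting bad {} on {}: {}", block, volume, cause.getMessage());
  }
}
{code}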



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10696) TestHDFSCLI fails

2016-07-27 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-10696:
--
Attachment: HDFS-10696.02.patch

> TestHDFSCLI fails
> -
>
> Key: HDFS-10696
> URL: https://issues.apache.org/jira/browse/HDFS-10696
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Kai Sasaki
> Attachments: HDFS-10696.01.patch, HDFS-10696.02.patch
>
>
> TestHDFSCLI fails.
> {noformat}2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(177)) -  Comparator: 
> [RegexpComparator]
> 2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(179)) -  Comparision result:   
> [fail]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(181)) - Expected output:   [^( 
> |\t)*The storage type specific quota is cleared when -storageType option is 
> specified.( )*]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(183)) -   Actual output:   
> [-clrSpaceQuota [-storageType <storagetype>] <dirname> ...: Clear the 
> space quota for each directory <dirName>.
> For each directory, attempt to clear the quota. An error will 
> be reported if
> 1. the directory does not exist or is a file, or
> 2. user is not an administrator.
> It does not fault if the directory has no quota.
> The storage type specific quota is cleared when -storageType 
> option is specified.   Available storageTypes are 
> - RAM_DISK
> - DISK
> - SSD
> - ARCHIVE
> ]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396759#comment-15396759
 ] 

Hadoop QA commented on HDFS-10676:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 66 unchanged - 0 fixed = 70 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820598/HDFS-10676.005.patch |
| JIRA Issue | HDFS-10676 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 108d3a035153 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b43de80 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16223/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16223/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16223/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16223/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add namenode metric to measure time spent in generating EDEKs
> -

[jira] [Updated] (HDFS-10457) DataNode should not auto-format block pool directory if VERSION is missing

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10457:
---
Attachment: HDFS-10457.002.patch

v02: the simple fix plus a regression test.
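Without presuming the exact shape of the patch, the guard presumably looks something like this sketch (paths, names, and message text are illustrative):

{code}
import java.io.File;
import java.io.IOException;

class BlockPoolVersionCheck {
  // Refuse to auto-format a block pool directory that has data but no
  // current/VERSION file: that usually indicates corruption, not a fresh
  // volume, so fail loudly instead of silently re-formatting.
  static void verifyBeforeFormat(File bpDir) throws IOException {
    File current = new File(bpDir, "current");
    File version = new File(current, "VERSION");
    if (current.exists() && !version.exists()) {
      throw new IOException("VERSION file is missing under " + current
          + "; refusing to auto-format block pool directory " + bpDir);
    }
  }
}
{code}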

> DataNode should not auto-format block pool directory if VERSION is missing
> --
>
> Key: HDFS-10457
> URL: https://issues.apache.org/jira/browse/HDFS-10457
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10457.001.patch, HDFS-10457.002.patch
>
>
> HDFS-10360 prevents the DN from auto-formatting a volume directory if the 
> current/VERSION is missing. However, if the current/VERSION in a block pool 
> directory is missing instead, the DN still auto-formats the directory.
> Filing this jira to fix the bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396748#comment-15396748
 ] 

Andrew Wang commented on HDFS-10519:


I think this would be pretty easy to backport to branch-2 if there's demand, 
just a few small conflicts.

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch, 
> HDFS-10519.003.patch, HDFS-10519.004.patch, HDFS-10519.005.patch, 
> HDFS-10519.006.patch, HDFS-10519.007.patch, HDFS-10519.008.patch, 
> HDFS-10519.009.patch, HDFS-10519.010.patch, HDFS-10519.011.patch, 
> HDFS-10519.012.patch, HDFS-10519.013.patch
>
>
> Standby Namenode has the option to do in-progress edit log tailing to improve 
> the data freshness. In-progress tailing is already implemented, but it's not 
> enabled as default configuration. And there's no related configuration key to 
> turn it on.
> Adding a related configuration key to let Standby Namenode is reasonable and 
> would be a basis for further improvement on Standby Namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10519:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.0.0-alpha1. Thanks for the contribution Jiayi!
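For anyone who wants to try the feature, enabling it should amount to setting the new key (key name quoted from memory; verify against hdfs-default.xml in your build), for example programmatically:

{code}
import org.apache.hadoop.conf.Configuration;

class EnableInProgressTailing {
  static Configuration conf() {
    Configuration conf = new Configuration();
    // Assumed key name for the new switch; the default stays false, so
    // standby tailing behavior is unchanged unless explicitly opted in.
    conf.setBoolean("dfs.ha.tail-edits.in-progress", true);
    return conf;
  }
}
{code}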

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Fix For: 3.0.0-alpha1
>
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch, 
> HDFS-10519.003.patch, HDFS-10519.004.patch, HDFS-10519.005.patch, 
> HDFS-10519.006.patch, HDFS-10519.007.patch, HDFS-10519.008.patch, 
> HDFS-10519.009.patch, HDFS-10519.010.patch, HDFS-10519.011.patch, 
> HDFS-10519.012.patch, HDFS-10519.013.patch
>
>
> Standby Namenode has the option to do in-progress edit log tailing to improve 
> the data freshness. In-progress tailing is already implemented, but it's not 
> enabled as default configuration. And there's no related configuration key to 
> turn it on.
> Adding a related configuration key to let Standby Namenode is reasonable and 
> would be a basis for further improvement on Standby Namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396734#comment-15396734
 ] 

Hadoop QA commented on HDFS-10519:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
32s{color} | {color:green} bkjournal in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820588/HDFS-10519.013.patch |
| JIRA Issue | HDFS-10519 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux 216c152974d2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b43de80 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16222/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Resul

[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396728#comment-15396728
 ] 

Hadoop QA commented on HDFS-10676:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 66 unchanged - 0 fixed = 70 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.cli.TestHDFSCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820584/HDFS-10676.004.patch |
| JIRA Issue | HDFS-10676 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 12ae203d6e08 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b43de80 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16221/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16221/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16221/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16221/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add namenode metric to measure time spent in generating EDEKs
> 
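The metric in the subject line boils down to timing the EDEK-generation call on the NameNode; a minimal sketch of the pattern, with a hypothetical metrics sink (not the actual patch):

{code}
import java.util.concurrent.TimeUnit;

class EdekTiming {
  interface Metrics { void addGenerateEDEKTime(long millis); }

  static void timed(Metrics metrics, Runnable generateEdek) {
    final long startNs = System.nanoTime();
    try {
      generateEdek.run(); // e.g. the KMS generateEncryptedKey call
    } finally {
      metrics.addGenerateEDEKTime(
          TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs));
    }
  }
}
{code}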

[jira] [Updated] (HDFS-10689) Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in the octal mode

2016-07-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10689:
--
Summary: Hdfs dfs chmod should reset sticky bit permission when the bit is 
omitted in the octal mode  (was: "hdfs dfs -chmod 777" does not remove sticky 
bit)

> Hdfs dfs chmod should reset sticky bit permission when the bit is omitted in 
> the octal mode
> ---
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch
>
>
> When a directory's permission is modified using the hdfs dfs chmod command 
> with the octal/numeric format, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396705#comment-15396705
 ] 

Hadoop QA commented on HDFS-4176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 424 unchanged - 2 fixed = 424 total (was 426) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820581/HDFS-4176.03.patch |
| JIRA Issue | HDFS-4176 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 1597bf5fcd77 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / eb7ff0c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16220/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16220/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16220/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> EditLogTailer should call rollEdits with a timeout
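For context, the improvement in the subject line amounts to bounding a potentially blocking rollEdits RPC; a hedged sketch of the pattern (the executor-based approach is illustrative, not necessarily how the patch does it):

{code}
import java.util.concurrent.*;

class BoundedRoll {
  static void rollWithTimeout(Callable<Void> rollEdits, long timeoutSec)
      throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<Void> f = executor.submit(rollEdits);
    try {
      f.get(timeoutSec, TimeUnit.SECONDS); // bound the blocking RPC
    } catch (TimeoutException e) {
      f.cancel(true); // give up; the tailer can retry on its next cycle
    } finally {
      executor.shutdown();
    }
  }
}
{code}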

[jira] [Commented] (HDFS-9259) Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario

2016-07-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396704#comment-15396704
 ] 

Arpit Agarwal commented on HDFS-9259:
-

Thanks for the quick review Mingliang.

> Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
> --
>
> Key: HDFS-9259
> URL: https://issues.apache.org/jira/browse/HDFS-9259
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9259.000.patch, HDFS-9259.001.patch
>
>
> We recently found that cross-DC hdfs write could be really slow. Further 
> investigation identified that this is due to the SendBufferSize and 
> ReceiveBufferSize used for hdfs write. The test ran "hadoop fs -copyFromLocal" 
> on a 256MB file across DC with different SendBufferSize and ReceiveBufferSize 
> values. The results showed that c is much faster than b, and b is faster than a.
> a. SendBufferSize=128k, ReceiveBufferSize=128k (hdfs default setting).
> b. SendBufferSize=128K, ReceiveBufferSize=not set(TCP auto tuning).
> c. SendBufferSize=not set, ReceiveBufferSize=not set(TCP auto tuning for both)
> HDFS-8829 has enabled scenario b. We would like to enable scenario c by 
> making SendBufferSize configurable at DFSClient side. Cc: [~cmccabe] [~He 
> Tianyi] [~kanaka] [~vinayrpet].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396700#comment-15396700
 ] 

Hadoop QA commented on HDFS-4176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 424 unchanged - 2 fixed = 424 total (was 426) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestEditLog |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820576/HDFS-4176.02.patch |
| JIRA Issue | HDFS-4176 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux aa44a1be660d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / eb7ff0c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16219/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16219/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16219/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> EditLogTailer should call rollEdits with a timeout
> --

[jira] [Updated] (HDFS-10689) "hdfs dfs -chmod 777" does not remove sticky bit

2016-07-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10689:
--
Attachment: HDFS-10689.003.patch

Fixed checkstyle issues.

> "hdfs dfs -chmod 777" does not remove sticky bit
> 
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch, 
> HDFS-10689.003.patch
>
>
> When a directory's permission is modified using the hdfs dfs chmod command 
> with the octal/numeric format, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9259) Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario

2016-07-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396693#comment-15396693
 ] 

Mingliang Liu commented on HDFS-9259:
-

Thanks [~arpitagarwal] for adding release note. It looks pretty good to me.

> Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
> --
>
> Key: HDFS-9259
> URL: https://issues.apache.org/jira/browse/HDFS-9259
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9259.000.patch, HDFS-9259.001.patch
>
>
> We recently found that cross-DC hdfs write could be really slow. Further 
> investigation identified that this is due to the SendBufferSize and 
> ReceiveBufferSize used for hdfs write. The test ran "hadoop fs -copyFromLocal" 
> on a 256MB file across DC with different SendBufferSize and ReceiveBufferSize 
> values. The results showed that c is much faster than b, and b is faster than a.
> a. SendBufferSize=128k, ReceiveBufferSize=128k (hdfs default setting).
> b. SendBufferSize=128K, ReceiveBufferSize=not set(TCP auto tuning).
> c. SendBufferSize=not set, ReceiveBufferSize=not set(TCP auto tuning for both)
> HDFS-8829 has enabled scenario b. We would like to enable scenario c by 
> making SendBufferSize configurable at DFSClient side. Cc: [~cmccabe] [~He 
> Tianyi] [~kanaka] [~vinayrpet].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with correct permissions

2016-07-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396688#comment-15396688
 ] 

Arpit Agarwal commented on HDFS-10324:
--

Thank you for the quick review and update!

> Trash directory in an encryption zone should be pre-created with correct 
> permissions
> 
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch, HDFS-10324.005.patch, 
> HDFS-10324.006.patch, HDFS-10324.007.patch, HDFS-10324.008.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have the permission to move directories to the trash directory.
> We should fix this bug, by pre-creating the .Trash directory with sticky bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9259) Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario

2016-07-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9259:

Release Note: Introduces a new configuration setting 
dfs.client.socket.send.buffer.size to control the socket send buffer size for 
writes. Setting it to zero enables TCP auto-tuning on systems that support it.
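A minimal sketch of a client opting into auto-tuning per that note; the key name comes from the note itself, the rest is illustrative:

{code}
import org.apache.hadoop.conf.Configuration;

class ClientSendBuffer {
  static Configuration conf() {
    Configuration conf = new Configuration();
    // 0 = do not set SO_SNDBUF explicitly and let the OS auto-tune
    // (scenario c in the issue description below).
    conf.setInt("dfs.client.socket.send.buffer.size", 0);
    return conf;
  }
}
{code}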

> Make SO_SNDBUF size configurable at DFSClient side for hdfs write scenario
> --
>
> Key: HDFS-9259
> URL: https://issues.apache.org/jira/browse/HDFS-9259
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HDFS-9259.000.patch, HDFS-9259.001.patch
>
>
> We recently found that cross-DC hdfs write could be really slow. Further 
> investigation identified that this is due to the SendBufferSize and 
> ReceiveBufferSize used for hdfs write. The test ran "hadoop fs -copyFromLocal" 
> on a 256MB file across DC with different SendBufferSize and ReceiveBufferSize 
> values. The results showed that c is much faster than b, and b is faster than a.
> a. SendBufferSize=128k, ReceiveBufferSize=128k (hdfs default setting).
> b. SendBufferSize=128K, ReceiveBufferSize=not set(TCP auto tuning).
> c. SendBufferSize=not set, ReceiveBufferSize=not set(TCP auto tuning for both)
> HDFS-8829 has enabled scenario b. We would like to enable scenario c by 
> making SendBufferSize configurable at DFSClient side. Cc: [~cmccabe] [~He 
> Tianyi] [~kanaka] [~vinayrpet].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with correct permissions

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396682#comment-15396682
 ] 

Wei-Chiu Chuang commented on HDFS-10324:


Thanks [~arpitagarwal] for the great summary. I made a minor change to the 
release note based on your summary.

> Trash directory in an encryption zone should be pre-created with correct 
> permissions
> 
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch, HDFS-10324.005.patch, 
> HDFS-10324.006.patch, HDFS-10324.007.patch, HDFS-10324.008.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have the permission to move directories to the trash directory.
> We should fix this bug, by pre-creating the .Trash directory with sticky bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10324) Trash directory in an encryption zone should be pre-created with correct permissions

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10324:
---
Release Note: HDFS will create a ".Trash" subdirectory when creating a new 
encryption zone to support soft delete for files deleted within the encryption 
zone. A new "dfsadmin -provisionTrash" command has been introduced to provision 
trash directories for encryption zones created with Apache Hadoop minor 
releases prior to 2.8.0.  (was: HDFS will create a ".Trash" directory when 
creating a new encryption zone for files deleted within the encryption zone. A 
new "dfsadmin -provisionTrash" command has been introduced to provision trash 
directories for encryption zones created with Apache Hadoop minor releases 
prior to 2.8.0.)
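The pre-created directory effectively carries mode 1777 inside the zone; a hedged sketch of the idea (API usage is illustrative, not the actual patch):

{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

class ProvisionTrash {
  static void provision(FileSystem fs, Path zone) throws java.io.IOException {
    // Sticky bit + rwx for all (01777): anyone may move files into .Trash,
    // but only the owner may delete or rename entries within it.
    fs.mkdirs(new Path(zone, ".Trash"), new FsPermission((short) 01777));
  }
}
{code}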

> Trash directory in an encryption zone should be pre-created with correct 
> permissions
> 
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch, HDFS-10324.005.patch, 
> HDFS-10324.006.patch, HDFS-10324.007.patch, HDFS-10324.008.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have the permission to move directories to the trash directory.
> We should fix this bug, by pre-creating the .Trash directory with sticky bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9365) Balancer does not work with the HDFS-6376 HA setup

2016-07-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-9365:

Summary: Balancer does not work with the HDFS-6376 HA setup  (was: Balaner 
does not work with the HDFS-6376 HA setup)

> Balancer does not work with the HDFS-6376 HA setup
> --
>
> Key: HDFS-9365
> URL: https://issues.apache.org/jira/browse/HDFS-9365
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.7.3
>
> Attachments: h9365_20151119.patch, h9365_20151120.patch, 
> h9365_20160523.patch
>
>
> HDFS-6376 added support for DistCp between two HA clusters. After the 
> change, the Balancer will use all the NNs from both the local and the 
> remote clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10689) "hdfs dfs -chmod 777" does not remove sticky bit

2016-07-27 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10689:
--
Attachment: HDFS-10689.002.patch

Thanks for the review, Lei Eddy.

1. I looked at a few methods in the code path of chmod and cleaned them up as 
well. The changes in {{FsShellPermission.java}}, {{ChmodParser.java}}, and 
{{UmaskParser.java}} include proper camelCase variable names, removal of 
unwanted lines, typo fixes, etc. But for now I reverted these changes to keep 
the patch more focused.
2. Yes, {{PermissionParser.java#73}} is formatting only. No change in Matcher 
Groups. For now, reverted this change.
3. In PermissionParser#applyOctalPattern we need to assign '=' to all types, 
since applying an octal mode sets a new permission and every mode bit gets a 
new assignment (a sketch of this idea follows after this comment).
4. Cleaned up imports in TestStickyBit.java. I am using IntelliJ's organize 
imports tool and it reorganized a little more now, arranging them 
chronologically. Hope it's ok.
5. Removed the tear down section and the cluster restart in 
{{TestStickyBit.java}}.
6. Additionally, made a few more changes to satisfy checkstyle.

Attached the v002 patch with review comments incorporated. Please take a look.
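To illustrate point 3 above: treating an octal mode as a full assignment means every bit, including the sticky bit, is set from the given digits, with an omitted leading digit defaulting to 0. A minimal hypothetical sketch (not the patch itself):

{code}
class OctalMode {
  // "755" and "1755" both assign every bit: the special-bits digit defaults
  // to 0 when omitted, so "chmod 755" clears a previously set sticky bit,
  // matching the behavior of POSIX chmod on a native filesystem.
  static short parse(String octal) {
    int value = Integer.parseInt(octal, 8); // "755" -> 0755, "1755" -> 01755
    return (short) (value & 07777);         // sticky/setuid/setgid + rwx bits
  }
}
{code}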

> "hdfs dfs -chmod 777" does not remove sticky bit
> 
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch, HDFS-10689.002.patch
>
>
> When a directory's permission is modified using the hdfs dfs chmod command 
> with the octal/numeric format, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10324) Trash directory in an encryption zone should be pre-created with correct permissions

2016-07-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1539#comment-1539
 ] 

Arpit Agarwal commented on HDFS-10324:
--

Added a release note. [~jojochuang] can you please review it for correctness?

> Trash directory in an encryption zone should be pre-created with correct 
> permissions
> 
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch, HDFS-10324.005.patch, 
> HDFS-10324.006.patch, HDFS-10324.007.patch, HDFS-10324.008.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx------. This creates a serious bug because any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have the permission to move directories to the trash directory.
> We should fix this bug, by pre-creating the .Trash directory with sticky bit.
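
A hedged sketch of the proposed pre-creation (FileSystem API; the zone path is 
only an example, and mkdirs normally applies the umask, hence the explicit 
setPermission afterwards):

{code}
// org.apache.hadoop.fs.{FileSystem, Path}, org.apache.hadoop.fs.permission.FsPermission
Path trash = new Path("/zones/zone1", ".Trash");      // example zone path
FsPermission perm = new FsPermission((short) 01777);  // world-writable + sticky bit
fs.mkdirs(trash, perm);         // mkdirs applies the umask...
fs.setPermission(trash, perm);  // ...so re-apply the exact mode afterwards
{code}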



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10324) Trash directory in an encryption zone should be pre-created with correct permissions

2016-07-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10324:
-
Release Note: HDFS will now create a ".Trash" directory inside a newly created 
encryption zone, to hold files deleted from within the zone. A new "dfsadmin 
-provisionTrash" command has been introduced to provision trash directories for 
encryption zones created with Apache Hadoop minor releases prior to 2.8.0.

> Trash directory in an encryption zone should be pre-created with correct 
> permissions
> 
>
> Key: HDFS-10324
> URL: https://issues.apache.org/jira/browse/HDFS-10324
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.8.0
> Environment: CDH5.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HDFS-10324.001.patch, HDFS-10324.002.patch, 
> HDFS-10324.003.patch, HDFS-10324.004.patch, HDFS-10324.005.patch, 
> HDFS-10324.006.patch, HDFS-10324.007.patch, HDFS-10324.008.patch
>
>
> We encountered a bug in HDFS-8831:
> After HDFS-8831, a deleted file in an encryption zone is moved to a .Trash 
> subdirectory within the encryption zone.
> However, if this .Trash subdirectory is not created beforehand, it will be 
> created and owned by the first user who deleted a file, with permission 
> drwx------. This creates a serious bug: any other non-privileged user 
> will not be able to delete any files within the encryption zone, because they 
> do not have the permission to move directories to the trash directory.
> We should fix this bug by pre-creating the .Trash directory with the sticky bit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10457) DataNode should not auto-format block pool directory if VERSION is missing

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10457:
---
Status: Patch Available  (was: Open)

> DataNode should not auto-format block pool directory if VERSION is missing
> --
>
> Key: HDFS-10457
> URL: https://issues.apache.org/jira/browse/HDFS-10457
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10457.001.patch
>
>
> HDFS-10360 prevents the DN from auto-formatting a volume directory if its 
> current/VERSION is missing. However, if the current/VERSION in a 
> block pool directory is missing instead, the DN still auto-formats the directory.
> Filing this jira to fix the bug.
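
A rough sketch of the intended guard (field and path names hypothetical):

{code}
File bpCurrent = new File(bpRoot, "current");
File version = new File(bpCurrent, "VERSION");
if (bpCurrent.exists() && !version.exists()) {
  // The block pool directory exists but VERSION is gone: likely corruption
  // or a partial wipe, so fail fast instead of auto-formatting over data.
  throw new IOException("VERSION file missing in " + bpCurrent
      + "; refusing to auto-format the block pool directory");
}
{code}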



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Status: Patch Available  (was: Open)

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).
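
A hedged sketch of such a metric using the metrics2 {{MutableRate}} pattern 
(the metric and method names here are assumptions, not the committed code):

{code}
// In NameNodeMetrics (field name assumed):
@Metric("Time spent generating EDEKs") MutableRate generateEDEKTime;

// Around the KMS interaction:
long start = Time.monotonicNow();
EncryptedKeyVersion edek = keyProvider.generateEncryptedKey(ezKeyName);
metrics.addGenerateEDEKTime(Time.monotonicNow() - start);
{code}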



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Status: Open  (was: Patch Available)

Adding a small change. Removing unnecessary variables.

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Attachment: HDFS-10676.005.patch

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch, 
> HDFS-10676.005.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-07-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396646#comment-15396646
 ] 

Arpit Agarwal commented on HDFS-8831:
-

Made minor edits to the release note. [~xyao], [~zhz], do you think anything in 
the HDFS-8831 release note is invalidated by later fixes to EZ trash? (I did 
not follow the subsequent patches very closely).

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA proposes to support trash for deletion of files within an 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8831) Trash Support for deletion in HDFS encryption zone

2016-07-27 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-8831:

Release Note: Add Trash support for deleting files within encryption zones. 
Deleted files will remain encrypted and they will be moved to a “.Trash” 
subdirectory under the root of the encryption zone, prefixed by $USER/current. 
Checkpoint and expunge continue to work like the existing Trash.  (was: Trash 
is now supported for deletion of files within encryption zone after HDFS-8831. 
The deleted encrypted files will remain encrypted and be moved to .Trash 
subdirectory under the root of the encryption zone prefixed by $USER/current 
with checkpoint and expunge working similar to existing Trash.)
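
For illustration, a file /zone1/dir/file1 deleted by user alice would end up at 
a path roughly like:

{noformat}
/zone1/.Trash/alice/current/zone1/dir/file1
{noformat}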

> Trash Support for deletion in HDFS encryption zone
> --
>
> Key: HDFS-8831
> URL: https://issues.apache.org/jira/browse/HDFS-8831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Fix For: 2.8.0
>
> Attachments: HDFS-8831-10152015.pdf, HDFS-8831.00.patch, 
> HDFS-8831.01.patch, HDFS-8831.02.patch, HDFS-8831.03.patch, 
> HDFS-8831.04.patch, HDFS-8831.05.patch
>
>
> Currently, "Soft Delete" is only supported if the whole encryption zone is 
> deleted. If you delete files whinin the zone with trash feature enabled, you 
> will get error similar to the following 
> {code}
> rm: Failed to move to trash: hdfs://HW11217.local:9000/z1_1/startnn.sh: 
> /z1_1/startnn.sh can't be moved from an encryption zone.
> {code}
> This JIRA proposes to support trash for deletion of files within an 
> encryption zone. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396639#comment-15396639
 ] 

Xiaoyu Yao commented on HDFS-10676:
---

Thanks [~hanishakoneru] for updating the patch. The {{src}} string within the 
loop can be removed as it is not being used. 
LGTM otherwise. 

{code}
  String src = "/zones/zone1/testfile-" + i;
  Path filePath = new Path("/zones/zone1/testfile-" + i);
{code}
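
With the unused variable dropped, the loop body would reduce to just:

{code}
Path filePath = new Path("/zones/zone1/testfile-" + i);
{code}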

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396635#comment-15396635
 ] 

Andrew Wang commented on HDFS-10519:


LGTM, thanks Jiayi! +1 pending Jenkins.

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch, 
> HDFS-10519.003.patch, HDFS-10519.004.patch, HDFS-10519.005.patch, 
> HDFS-10519.006.patch, HDFS-10519.007.patch, HDFS-10519.008.patch, 
> HDFS-10519.009.patch, HDFS-10519.010.patch, HDFS-10519.011.patch, 
> HDFS-10519.012.patch, HDFS-10519.013.patch
>
>
> Standby Namenode has the option to do in-progress edit log tailing to improve 
> data freshness. In-progress tailing is already implemented, but it is not 
> enabled by default, and there is no related configuration key to 
> turn it on.
> Adding such a configuration key to let the Standby Namenode tail in-progress 
> edits is reasonable and would be a basis for further improvements on the Standby Namenode.
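
A hedged sketch of enabling it (the key name follows the patch under review 
and may differ in the committed version):

{code}
Configuration conf = new Configuration();
conf.setBoolean("dfs.ha.tail-edits.in-progress", true);
{code}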



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-27 Thread Jiayi Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Zhou updated HDFS-10519:
--
Attachment: HDFS-10519.013.patch

Undo some whitespace changes and update test strings instead of adding a new 
method.

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch, 
> HDFS-10519.003.patch, HDFS-10519.004.patch, HDFS-10519.005.patch, 
> HDFS-10519.006.patch, HDFS-10519.007.patch, HDFS-10519.008.patch, 
> HDFS-10519.009.patch, HDFS-10519.010.patch, HDFS-10519.011.patch, 
> HDFS-10519.012.patch, HDFS-10519.013.patch
>
>
> Standby Namenode has the option to do in-progress edit log tailing to improve 
> data freshness. In-progress tailing is already implemented, but it is not 
> enabled by default, and there is no related configuration key to 
> turn it on.
> Adding such a configuration key to let the Standby Namenode tail in-progress 
> edits is reasonable and would be a basis for further improvements on the Standby Namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8224) Any IOException in DataTransfer#run() will run diskError thread even if it is not disk error

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396598#comment-15396598
 ] 

Wei-Chiu Chuang commented on HDFS-8224:
---

Hi [~shahrs87], thanks for bringing up this discussion. What would be the relation 
between this proposal and HDFS-10627? In its current form, if 
{{blockSender.sendPacket()}} gets a Connection Reset or Broken pipe, the block 
is added to the scanning queue of the VolumeScanner, which would mean the block 
is scanned twice when this case happens. Or we could move that piece of code to 
the catch block of {{DataTransfer#run()}} to add the block into the scanning queue 
there.
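
A rough sketch of that alternative, with the suspect-block call moved into the 
catch block of {{DataTransfer#run()}} (the markSuspectBlock-style call and the 
surrounding names are assumptions about the VolumeScanner API):

{code}
} catch (IOException ie) {
  LOG.warn(bpReg + ":Failed to transfer " + b + " to " + targets[0] + " got ", ie);
  // Queue the block for a rescan here, so only blocks whose transfer
  // actually failed are scanned, and each of them only once:
  blockScanner.markSuspectBlock(data.getVolume(b).getStorageID(), b);
  // Keep the existing probe for genuine disk errors:
  checkDiskErrorAsync();
}
{code}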

> Any IOException in DataTransfer#run() will run diskError thread even if it is 
> not disk error
> 
>
> Key: HDFS-8224
> URL: https://issues.apache.org/jira/browse/HDFS-8224
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Fix For: 2.8.0
>
>
> This happened in our 2.6 cluster.
> One of the block and its metadata file were corrupted.
> The disk was healthy in this case.
> Only the block was corrupt.
> Namenode tried to copy that block to another datanode but failed with the 
> following stack trace:
> 2015-04-20 01:04:04,421 
> [org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@11319bc4] WARN 
> datanode.DataNode: DatanodeRegistration(a.b.c.d, 
> datanodeUuid=e8c5135c-9b9f-4d05-a59d-e5525518aca7, infoPort=1006, 
> infoSecurePort=0, ipcPort=8020, 
> storageInfo=lv=-56;cid=CID-e7f736ac-158e-446e-9091-7e66f3cddf3c;nsid=358250775;c=1428471998571):Failed
>  to transfer BP-xxx-1351096255769:blk_2697560713_1107108863999 to 
> a1.b1.c1.d1:1004 got 
> java.io.IOException: Could not create DataChecksum of type 0 with 
> bytesPerChecksum 0
> at 
> org.apache.hadoop.util.DataChecksum.newDataChecksum(DataChecksum.java:125)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:175)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:140)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readDataChecksum(BlockMetadataHeader.java:102)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.(BlockSender.java:287)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1989)
> at java.lang.Thread.run(Thread.java:722)
> The following catch block in the DataTransfer#run method treats every 
> IOException as a disk error and runs the disk error check:
> {noformat}
> catch (IOException ie) {
> LOG.warn(bpReg + ":Failed to transfer " + b + " to " +
> targets[0] + " got ", ie);
> // check if there are any disk problem
> checkDiskErrorAsync();
>   } 
> {noformat}
> This block was never scanned by BlockPoolSliceScanner; otherwise it would have 
> been reported as a corrupt block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10667) Report more accurate info about data corruption location

2016-07-27 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-10667:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   2.8.0
   Status: Resolved  (was: Patch Available)

I have committed to trunk, branch-3.0.0-alpha1, branch-2, branch-2.8.

Thanks [~yuanbo] for the contribution, and [~vinayrpet] for the review.



> Report more accurate info about data corruption location
> 
>
> Key: HDFS-10667
> URL: https://issues.apache.org/jira/browse/HDFS-10667
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Yuanbo Liu
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10667.001.patch, HDFS-10667.002.patch, 
> HDFS-10667.003.patch, HDFS-10667.004.patch, HDFS-10667.005.patch
>
>
> Per 
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15376897&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15376897
> 129.77 report:
> {code}
> 2016-07-13 11:49:01,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving blk_1116167880_42906656 src: /10.6.134.229:43844 dest: 
> /10.6.129.77:5080
> 2016-07-13 11:49:01,543 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Checksum error in block blk_1116167880_42906656 from /10.6.134.229:43844
> org.apache.hadoop.fs.ChecksumException: Checksum error: 
> DFSClient_NONMAPREDUCE_2019484565_1 at 81920 exp: 1352119728 got: -1012279895
> at 
> org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native 
> Method)
> at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSumsByteArray(NativeCrc32.java:69)
> at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:347)
> at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:294)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:421)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:558)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:789)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:917)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:174)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:80)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> 2016-07-13 11:49:01,543 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for blk_1116167880_42906656
> java.io.IOException: Terminating due to a checksum error.java.io.IOException: 
> Unexpected checksum mismatch while writing blk_1116167880_42906656 from 
> /10.6.134.229:43844
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:571)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:789)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:917)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:174)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:80)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> and
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15378879&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15378879
> {quote}
> While verifying only packet, the position mentioned in the checksum 
> exception, is relative to packet buffer offset, not the block offset. So 
> 81920 is the offset in the exception.
> {quote}
> Creating this jira to report more accurate corruption location information: the 
> offset in the file, the offset in the block, and the offset in the packet.
> See 
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15387083&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15387083
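
The arithmetic being requested, as a hedged sketch (variable names illustrative):

{code}
// 81920 in the exception above is the offset within the packet buffer.
long offsetInBlock = packetOffsetInBlock + offsetInPacket;
long offsetInFile  = blockOffsetInFile   + offsetInBlock;
{code}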



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396585#comment-15396585
 ] 

Hanisha Koneru commented on HDFS-10676:
---

Thank you [~xyao] for the review and feedback. I have updated and submitted a 
new patch with the required changes.

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Attachment: HDFS-10676.004.patch

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Status: In Progress  (was: Patch Available)

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-10676:
--
Status: Patch Available  (was: In Progress)

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch, HDFS-10676.004.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10689) "hdfs dfs -chmod 777" does not remove sticky bit

2016-07-27 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396568#comment-15396568
 ] 

Manoj Govindassamy commented on HDFS-10689:
---

Thanks for the inputs [~cnauroth]. 
[~andrew.wang], sure, I will change the summary and add a release note. 

> "hdfs dfs -chmod 777" does not remove sticky bit
> 
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch
>
>
> When a directory's permission is modified using the hdfs dfs -chmod command and 
> the octal/numeric format is used, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-4176:

Attachment: HDFS-4176.03.patch

Thanks, [~jingzhao].  

Updated the patch to use {{ThreadFactoryBuilder}}.



> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, HDFS-4176.03.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC were 
> interruptible, that would also fix the issue.
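
A minimal sketch of the timeout approach (executor, proxy, and timeout names 
assumed):

{code}
Future<Void> future = rollEditsRpcExecutor.submit(new Callable<Void>() {
  @Override
  public Void call() throws IOException {
    getActiveNodeProxy().rollEditLog();  // the RPC that can hang forever
    return null;
  }
});
// Bound the wait so a frozen (but not crashed) active NN cannot block
// the standby's tailer thread indefinitely:
future.get(rollEditsTimeoutMs, TimeUnit.MILLISECONDS);
{code}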



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396547#comment-15396547
 ] 

Jing Zhao commented on HDFS-4176:
-

Thanks for addressing the comments, [~eddyxu]

Nit: We can use ThreadFactoryBuilder to simplify the following code. Other than 
this the patch looks good to me.
{code}
177 rollEditsRpcExecutor = Executors.newSingleThreadExecutor(
178 new ThreadFactory() {
179   @Override
180   public Thread newThread(Runnable r) {
181 Thread thread = 
Executors.defaultThreadFactory().newThread(r);
182 thread.setDaemon(true);
183 return thread;
184   }
185 });
{code}
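
For reference, the Guava {{ThreadFactoryBuilder}} form would collapse this to 
roughly:

{code}
rollEditsRpcExecutor = Executors.newSingleThreadExecutor(
    new ThreadFactoryBuilder().setDaemon(true).build());
{code}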

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC were 
> interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10689) "hdfs dfs -chmod 777" does not remove sticky bit

2016-07-27 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10689:
---
Affects Version/s: 2.6.4
 Target Version/s: 3.0.0-alpha1
 Hadoop Flags: Incompatible change
  Description: 
When a directory's permission is modified using the hdfs dfs -chmod command and 
the octal/numeric format is used, the leading sticky bit is not fully honored.

1. Create a dir dir_test_with_sticky_bit
2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
/dir_test_with_sticky_bit
3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
/dir_test_with_sticky_bit

Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
filesystem with native chmod.

4. However, removing sticky bit permission by explicitly turning off the bit 
works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit

{noformat}
manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
manoj@~/work/hadev-pp: hdfs dfs -ls /
Found 2 items
drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
/dir_test_with_sticky_bit
drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user

manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
manoj@~/work/hadev-pp: hdfs dfs -ls /
Found 2 items
drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
/dir_test_with_sticky_bit  <=== sticky bit still intact
drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user

manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
manoj@~/work/hadev-pp: hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
/dir_test_with_sticky_bit
drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
manoj@~/work/hadev-pp: 
{noformat}




  was:

When a directory's permission is modified using the hdfs dfs -chmod command and 
the octal/numeric format is used, the leading sticky bit is not fully honored.

1. Create a dir dir_test_with_sticky_bit
2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
/dir_test_with_sticky_bit
3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
/dir_test_with_sticky_bit

Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
filesystem with native chmod.

4. However, removing sticky bit permission by explicitly turning off the bit 
works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit

{noformat}
manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
manoj@~/work/hadev-pp: hdfs dfs -ls /
Found 2 items
drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
/dir_test_with_sticky_bit
drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user

manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
manoj@~/work/hadev-pp: hdfs dfs -ls /
Found 2 items
drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
/dir_test_with_sticky_bit  <=== sticky bit still intact
drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user

manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
manoj@~/work/hadev-pp: hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
/dir_test_with_sticky_bit
drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
manoj@~/work/hadev-pp: 
{noformat}





Got it, thanks Chris. Updating the JIRA fields as appropriate.

Manoj, do you mind adding a little release note about how this change will 
affect end users? Could also consider changing the summary to say what this 
patch will do, rather than just what is broken.

> "hdfs dfs -chmod 777" does not remove sticky bit
> 
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch
>
>
> When a directory's permission is modified using the hdfs dfs -chmod command and 
> the octal/numeric format is used, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_te

[jira] [Updated] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-4176:

Attachment: HDFS-4176.02.patch

Thanks a lot for the great inputs, [~jingzhao]

I have updated the patch to address all your comments. 

The test failures are not relevant. {{TestHDFSCLI}} fails on trunk as well.

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, 
> HDFS-4176.02.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC were 
> interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10519) Add a configuration option to enable in-progress edit log tailing

2016-07-27 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396529#comment-15396529
 ] 

Andrew Wang commented on HDFS-10519:


Just some nits, I think this really is the last one:

* Some extra whitespace changes in RemoteEditLogManifest. Looks like there are 
a few other places we could undo the whitespace changes too.
* Rather than adding this new logString method, shall we update the test 
strings instead? This method doesn't seem useful except to avoid updating the 
tests.

> Add a configuration option to enable in-progress edit log tailing
> -
>
> Key: HDFS-10519
> URL: https://issues.apache.org/jira/browse/HDFS-10519
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Reporter: Jiayi Zhou
>Assignee: Jiayi Zhou
>Priority: Minor
> Attachments: HDFS-10519.001.patch, HDFS-10519.002.patch, 
> HDFS-10519.003.patch, HDFS-10519.004.patch, HDFS-10519.005.patch, 
> HDFS-10519.006.patch, HDFS-10519.007.patch, HDFS-10519.008.patch, 
> HDFS-10519.009.patch, HDFS-10519.010.patch, HDFS-10519.011.patch, 
> HDFS-10519.012.patch
>
>
> Standby Namenode has the option to do in-progress edit log tailing to improve 
> data freshness. In-progress tailing is already implemented, but it is not 
> enabled by default, and there is no related configuration key to 
> turn it on.
> Adding such a configuration key to let the Standby Namenode tail in-progress 
> edits is reasonable and would be a basis for further improvements on the Standby Namenode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396522#comment-15396522
 ] 

Hadoop QA commented on HDFS-8986:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 26s{color} | {color:orange} root: The patch generated 1 new + 352 unchanged 
- 11 fixed = 353 total (was 363) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 18s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820537/HDFS-8986.03.patch |
| JIRA Issue | HDFS-8986 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux 627a5e33c365 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC

[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396510#comment-15396510
 ] 

Hadoop QA commented on HDFS-4176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 424 unchanged - 2 fixed = 424 total (was 426) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSShell |
|   | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820546/HDFS-4176.01.patch |
| JIRA Issue | HDFS-4176 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 8616328fb9d1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 54fe17a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16218/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16218/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16218/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> EditLogTailer should call rollEdits with a timeout
> -

[jira] [Commented] (HDFS-10641) TestBlockManager#testBlockReportQueueing fails intermittently

2016-07-27 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396506#comment-15396506
 ] 

Mingliang Liu commented on HDFS-10641:
--

Ping [~umamaheswararao] and [~kihwal] as they reviewed the [HDFS-9198].

> TestBlockManager#testBlockReportQueueing fails intermittently
> -
>
> Key: HDFS-10641
> URL: https://issues.apache.org/jira/browse/HDFS-10641
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode, test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10641.000.patch
>
>
> See example failing [stack 
> trace|https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManager/testBlockReportQueueing/].
> h6. Stacktrace
> {code}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlockReportQueueing(TestBlockManager.java:1074)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10689) "hdfs dfs -chmod 777" does not remove sticky bit

2016-07-27 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396500#comment-15396500
 ] 

Chris Nauroth commented on HDFS-10689:
--

bq. Now the question is if we declare this a bug fix that can be backported to 
branch-2, or if this behavior change is too incompatible. Given that sticky 
bits are pretty rare in general, I think it's safe for branch-2, but would 
welcome other's thoughts. Anything to add Chris Nauroth?

[~andrew.wang], thanks for the notification.  I agree with the proposed change, 
but the compatibility aspects of changes like this are always tricky to 
consider.  In this case, the change is something that potentially weakens 
authorization.  If a user has some automation that runs chmod on a directory, 
and that user expects the current behavior that sticky bit is preserved, then 
the effect would be to start allowing users to delete files owned by someone 
else.  Admittedly, sticky bit usage is rare, typically only on /tmp, but I'd 
still be more comfortable with this as a 3.x change flagged 
backward-incompatible.

> "hdfs dfs -chmod 777" does not remove sticky bit
> 
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch
>
>
> When a directory's permission is modified using the hdfs dfs -chmod command and 
> the octal/numeric format is used, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8897) Balancer should handle fs.defaultFS with trailing slashes in HA

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396495#comment-15396495
 ] 

Wei-Chiu Chuang commented on HDFS-8897:
---

bq. If we add the code to start the balancer with that namenodes, you will see 
Balancer: namenodes = [hdfs://sandbox/, hdfs://sandbox] without the fix.
I see. Please move that code to where the balancer has not yet started.

bq. This case is tricky because {{fs.defaultFS}} is valid but with a trailing 
slash, everything else works fine except Balancer, otherwise users would have 
detected the problem much earlier in other ways.
It looks to me that whenever the {{fs.defaultFS}} URI is used, the consumer of 
the URI takes either the scheme, the authority, the host or the port of the URI, 
but the path component is ignored, at least when the scheme is hdfs.
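
As a minimal sketch of that observation (hypothetical snippet, assuming 
{{conf}} is a Hadoop Configuration; this is not the actual patch), keeping only 
the scheme and authority of the URI makes the two spellings name the same 
namenode:

{code}
// java.net.URI: "hdfs://sandbox/" and "hdfs://sandbox" should be equivalent.
URI raw = URI.create(conf.get("fs.defaultFS"));     // e.g. hdfs://sandbox/
URI normalized = URI.create(raw.getScheme() + "://" + raw.getAuthority());
System.out.println(normalized);                     // prints hdfs://sandbox
{code}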

> Balancer should handle fs.defaultFS with trailing slashes in HA
> ---
>
> Key: HDFS-8897
> URL: https://issues.apache.org/jira/browse/HDFS-8897
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.7.1
> Environment: Centos 6.6
>Reporter: LINTE
>Assignee: John Zhuge
> Attachments: HDFS-8897.001.patch
>
>
> When the balancer is launched, it should test if there is already a 
> /system/balancer.id file in HDFS.
> When the file doesn't exist, the balancer doesn't want to run: 
> 15/08/14 16:35:12 INFO balancer.Balancer: namenodes  = [hdfs://sandbox/, 
> hdfs://sandbox]
> 15/08/14 16:35:12 INFO balancer.Balancer: parameters = 
> Balancer.Parameters[BalancingPolicy.Node, threshold=10.0, max idle iteration 
> = 5, number of nodes to be excluded = 0, number of nodes to be included = 0]
> Time Stamp   Iteration#  Bytes Already Moved  Bytes Left To Move  
> Bytes Being Moved
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Block token params received from 
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 15/08/14 16:35:14 INFO block.BlockTokenSecretManager: Setting block keys
> 15/08/14 16:35:14 INFO balancer.KeyManager: Update block keys every 2hrs, 
> 30mins, 0sec
> java.io.IOException: Another Balancer is running..  Exiting ...
> Aug 14, 2015 4:35:14 PM  Balancing took 2.408 seconds
> Looking at the audit log file when trying to run the balancer, the balancer 
> create the /system/balancer.id and then delete it on exiting ... 
> 2015-08-14 16:37:45,844 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:45,900 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=create  
> src=/system/balancer.id dst=nullperm=hdfs:hadoop:rw-r-  
> proto=rpc
> 2015-08-14 16:37:45,919 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,090 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,112 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=getfileinfo 
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> 2015-08-14 16:37:46,117 INFO FSNamesystem.audit: allowed=true   
> ugi=hdfs@SANDBOX.HADOOP (auth:KERBEROS) ip=/x.x.x.x   cmd=delete  
> src=/system/balancer.id dst=nullperm=null   proto=rpc
> The error seems to be located in 
> org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java 
> The function checkAndMarkRunning returns null even if the /system/balancer.id 
> doesn't exist before entering this function; if it exists, then it is deleted 
> and the balancer exits with the same error.
> 
>   private OutputStream checkAndMarkRunning() throws IOException {
> try {
>   if (fs.exists(idPath)) {
> // try appending to it so that it will fail fast if another balancer 
> is
> // running.
> IOUtils.closeStream(fs.append(idPath));
> fs.delete(idPath, true);
>   }
>   final FSDataOutputStream fsout = fs.create(idPath, false);
>   // mark balancer idPath to be deleted dur

[jira] [Commented] (HDFS-10667) Report more accurate info about data corruption location

2016-07-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396493#comment-15396493
 ] 

Hudson commented on HDFS-10667:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #10164 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10164/])
HDFS-10667. Report more accurate info about data corruption location. (yzhang: 
rev eb7ff0c9927131f4a797148b970a95a1abf7d847)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java


> Report more accurate info about data corruption location
> 
>
> Key: HDFS-10667
> URL: https://issues.apache.org/jira/browse/HDFS-10667
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Yuanbo Liu
> Attachments: HDFS-10667.001.patch, HDFS-10667.002.patch, 
> HDFS-10667.003.patch, HDFS-10667.004.patch, HDFS-10667.005.patch
>
>
> Per 
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15376897&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15376897
> 129.77 report:
> {code}
> 2016-07-13 11:49:01,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving blk_1116167880_42906656 src: /10.6.134.229:43844 dest: 
> /10.6.129.77:5080
> 2016-07-13 11:49:01,543 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Checksum error in block blk_1116167880_42906656 from /10.6.134.229:43844
> org.apache.hadoop.fs.ChecksumException: Checksum error: 
> DFSClient_NONMAPREDUCE_2019484565_1 at 81920 exp: 1352119728 got: -1012279895
> at 
> org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native 
> Method)
> at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSumsByteArray(NativeCrc32.java:69)
> at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:347)
> at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:294)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:421)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:558)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:789)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:917)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:174)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:80)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> 2016-07-13 11:49:01,543 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for blk_1116167880_42906656
> java.io.IOException: Terminating due to a checksum error.java.io.IOException: 
> Unexpected checksum mismatch while writing blk_1116167880_42906656 from 
> /10.6.134.229:43844
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:571)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:789)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:917)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:174)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:80)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> and
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15378879&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15378879
> {quote}
> While verifying only packet, the position mentioned in the checksum 
> exception, is relative to packet buffer offset, not the block offset. So 
> 81920 is the offset in the exception.
> {quote}
> Create this jira to report more accurate corruption location information: the 
> offset in the file, offset in block, and offset in packet.
> See 
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15387083&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15387083



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10441) libhdfs++: HA namenode support

2016-07-27 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396472#comment-15396472
 ] 

James Clampffer commented on HDFS-10441:


Thanks for looking at this more Bob.

bq. Should HandleRpcResponse return a value or just forward-propagate to 
CommsError? In the implementation, we alternately do one, the other, or both.
Probably best to write functions as plain procedural code that return values 
wherever possible, IMO.  That way the async work and associated complexity are 
clustered in the same areas rather than distributed across the code.  It might 
be worth splitting off a separate jira to refactor and document the RPC stuff; 
the current mix of state machine(s) and continuations was a little tricky to 
reverse engineer for adding this feature.  As you pointed out, I'm doing both 
returns and forwarding here.  The current code is fairly well tested/used and 
I'd really like to get a CI test around this before changing it too much more.  
Think that's worth a separate jira?

bq. RpcConnectionImpl::ConnectAndFlush logs entry at the INFO level. I think we 
already log once when we're attempting to connect, do we not? If this is the 
only place we keep it, we should include the endpoint we're connecting to in 
the log message.
Yea, RpcConnectionImpl::ConnectAndFlush and RpcConnectionImpl::Connect are both 
logging effectively the same thing at the moment.  Logging both helps a little 
bit to reason about the indirect recursion via async callbacks going on in that 
code.  I think it'd be best to wait until the connection callback to log the 
endpoint in case there are multiple endpoints, but that starts getting a bit 
out of scope for this.  My general strategy is to follow "perfect is the enemy 
of good" when it comes to this sort of stuff, particularly early in the 
lifetime of a feature.

> libhdfs++: HA namenode support
> --
>
> Key: HDFS-10441
> URL: https://issues.apache.org/jira/browse/HDFS-10441
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10441.HDFS-8707.000.patch, 
> HDFS-10441.HDFS-8707.002.patch, HDFS-10441.HDFS-8707.003.patch, 
> HDFS-10441.HDFS-8707.004.patch, HDFS-10441.HDFS-8707.005.patch, 
> HDFS-10441.HDFS-8707.006.patch, HDFS-10441.HDFS-8707.007.patch, 
> HDFS-10441.HDFS-8707.008.patch, HDFS-10441.HDFS-8707.009.patch, 
> HDFS-10441.HDFS-8707.010.patch, HDFS-10441.HDFS-8707.011.patch, 
> HDFS-10441.HDFS-8707.012.patch, HDFS-10441.HDFS-8707.013.patch, 
> HDFS-10441.HDFS-8707.014.patch, HDFS-8707.HDFS-10441.001.patch
>
>
> If a cluster is HA enabled then do proper failover.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396425#comment-15396425
 ] 

Jing Zhao commented on HDFS-4176:
-

Thanks for working on this, [~eddyxu]! The patch looks good to me overall. Some 
minor comments:
# We should make sure the thread in the executor is a daemon thread.
{code}
ExecutorService executor = Executors.newSingleThreadExecutor();
{code}
# We can try to reuse the ExecutorService, or, for the current code, we should 
call {{shutdown}} in the finally block.
# It's better to separate the exceptions into different catch blocks, since 
they have different handling logic.
{code}
} catch (InterruptedException | ExecutionException | TimeoutException e) {
{code}
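
Putting the three comments together, a minimal sketch of the rolling path 
(hypothetical names such as {{activeNamenode}} and {{rollEditsTimeoutMs}}; 
java.util.concurrent types; not the actual patch):

{code}
// Comments 1 and 2: one reusable executor backed by a daemon thread.
ExecutorService executor = Executors.newSingleThreadExecutor(r -> {
  Thread t = new Thread(r, "EditLogTailer-rollEdits");
  t.setDaemon(true);
  return t;
});
try {
  Future<Void> future = executor.submit(() -> {
    activeNamenode.rollEditLog();   // hypothetical RPC proxy call
    return null;
  });
  future.get(rollEditsTimeoutMs, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
  // comment 3: the active NN is unresponsive; give up on this roll attempt
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
} catch (ExecutionException e) {
  // rollEditLog itself failed on the active NN
}
// The executor is reused across calls; shut it down when the tailer stops.
{code}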

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10609) Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396412#comment-15396412
 ] 

Wei-Chiu Chuang commented on HDFS-10609:


The test failure in TestDFSCli is being tracked by HDFS-10667. TestHdfsAdmin 
passed in my local tree.

Other than the test failure and the checkstyle issue, would any watcher like to 
review the v2 patch? Thanks!

> Uncaught InvalidEncryptionKeyException during pipeline recovery may abort 
> downstream applications
> -
>
> Key: HDFS-10609
> URL: https://issues.apache.org/jira/browse/HDFS-10609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
> Environment: CDH5.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10609.001.patch, HDFS-10609.002.patch
>
>
> In normal operations, if SASL negotiation fails due to 
> {{InvalidEncryptionKeyException}}, it is typically a benign exception, which 
> is caught and retried :
> {code:title=SaslDataTransferServer#doSaslHandshake}
>   if (ioe instanceof SaslException &&
>   ioe.getCause() != null &&
>   ioe.getCause() instanceof InvalidEncryptionKeyException) {
> // This could just be because the client is long-lived and hasn't gotten
> // a new encryption key from the NN in a while. Upon receiving this
> // error, the client will get a new encryption key from the NN and retry
> // connecting to this DN.
> sendInvalidKeySaslErrorMessage(out, ioe.getCause().getMessage());
>   } 
> {code}
> {code:title=DFSOutputStream.DataStreamer#createBlockOutputStream}
> if (ie instanceof InvalidEncryptionKeyException && refetchEncryptionKey > 0) {
> DFSClient.LOG.info("Will fetch a new encryption key and retry, " 
> + "encryption key was invalid when connecting to "
> + nodes[0] + " : " + ie);
> {code}
> However, if the exception is thrown during pipeline recovery, the 
> corresponding code does not handle it properly, and the exception is spilled 
> out to downstream applications, such as SOLR, aborting its operation:
> {quote}
> 2016-07-06 12:12:51,992 ERROR org.apache.solr.update.HdfsTransactionLog: 
> Exception closing tlog.
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=557709482) doesn't exist. Current key: 1350592619
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:474)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:1308)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1272)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1433)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1147)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:632)
> 2016-07-06 12:12:51,997 ERROR org.apache.solr.update.CommitTracker: auto 
> commit error...:org.apache.solr.common.SolrException: 
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=557709482) doesn't exist. Current key: 1350592619
> at 
> org.apache.solr.update.HdfsTransactionLog.close(HdfsTransactionLog.java:316)
> at 
> org.apache.solr.update.TransactionLog.decref(TransactionLog.java:505)
> at org.apache.solr.update.UpdateLog.addOldLog(UpdateLog.java:380)
> at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:676)
> at 
> org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:623)
> at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
> at 
> java.util.concurrent.Ex

[jira] [Comment Edited] (HDFS-8983) NameNode support for protected directories

2016-07-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396378#comment-15396378
 ] 

Arpit Agarwal edited comment on HDFS-8983 at 7/27/16 9:09 PM:
--

Just tested it. {{delete("/", true)}} will throw AccessControlException if
- "/" is included in {{fs.protected.directories}}; and
- it is non-empty.

Else it returns false as before. The list of protected directories is empty by 
default.

{code}
In contrast, HDFS never permits the deletion of the root of a filesystem; the
filesystem can be taken offline and reformatted if an empty
filesystem is desired.

if isDir(FS, p) and isRoot(p) and recursive :
FS' = FS
result = False
{code}

I see what you mean about the contract. We could fix HDFS so it always returns 
false regardless of the protected dirs settings. It will take some refactoring 
and I'd also want to understand why the root directory check wasn't done 
earlier.


was (Author: arpitagarwal):
Just tested it. {{delete("/", true)}} will throw AccessControlException if
- "/" is included in {{fs.protected.directories}}; and
- it is non-empty.

Else it returns false as before. The list of protected directories is empty by 
default.

{code}
In contrast, HDFS never permits the deletion of the root of a filesystem; the
filesystem can be taken offline and reformatted if an empty
filesystem is desired.

if isDir(FS, p) and isRoot(p) and recursive :
FS' = FS
result = False
{code}

I see what you mean about the contract. We could fix it so it always returns 
true regardless of the protected dirs settings. It will take some refactoring 
and I'd also want to understand why the root directory check wasn't done 
earlier.

> NameNode support for protected directories
> --
>
> Key: HDFS-8983
> URL: https://issues.apache.org/jira/browse/HDFS-8983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8393.01.patch, HDFS-8393.02.patch, 
> HDFS-8983.03.patch, HDFS-8983.04.patch
>
>
> To protect important system directories from inadvertent deletion (e.g. 
> /Users) the NameNode can allow marking directories as _protected_. Such 
> directories cannot be deleted unless they are empty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8983) NameNode support for protected directories

2016-07-27 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396378#comment-15396378
 ] 

Arpit Agarwal commented on HDFS-8983:
-

Just tested it. {{delete("/", true)}} will throw AccessControlException if
- "/" is included in {{fs.protected.directories}}; and
- it is non-empty.

Else it returns false as before. The list of protected directories is empty by 
default.

{code}
In contrast, HDFS never permits the deletion of the root of a filesystem; the
filesystem can be taken offline and reformatted if an empty
filesystem is desired.

if isDir(FS, p) and isRoot(p) and recursive :
FS' = FS
result = False
{code}

I see what you mean about the contract. We could fix it so it always returns 
true regardless of the protected dirs settings. It will take some refactoring 
and I'd also want to understand why the root directory check wasn't done 
earlier.
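
For illustration, a minimal sketch of the behavior just described (a 
hypothetical test fragment; {{fs}} is the cluster's DistributedFileSystem, and 
cluster setup is elided):

{code}
Configuration conf = new Configuration();
conf.set("fs.protected.directories", "/");
// ... start the cluster with conf and create a file under / ...
try {
  fs.delete(new Path("/"), true);   // recursive delete of a non-empty root
} catch (AccessControlException ace) {
  // expected: a protected, non-empty directory cannot be deleted
}
{code}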

> NameNode support for protected directories
> --
>
> Key: HDFS-8983
> URL: https://issues.apache.org/jira/browse/HDFS-8983
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: 2.8.0
>
> Attachments: HDFS-8393.01.patch, HDFS-8393.02.patch, 
> HDFS-8983.03.patch, HDFS-8983.04.patch
>
>
> To protect important system directories from inadvertent deletion (e.g. 
> /Users) the NameNode can allow marking directories as _protected_. Such 
> directories cannot be deleted unless they are empty. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10225) DataNode hot swap drives should disallow storage type changes.

2016-07-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396373#comment-15396373
 ] 

Xiao Chen commented on HDFS-10225:
--

I forgot to mention: the failed tests looked unrelated.

> DataNode hot swap drives should disallow storage type changes. 
> ---
>
> Key: HDFS-10225
> URL: https://issues.apache.org/jira/browse/HDFS-10225
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10225.000.patch, HDFS-10225.001.patch, 
> HDFS-10225.002.patch, HDFS-10225.003.patch
>
>
> The current hot swap code only differentiates data dirs by their paths. People 
> might want to change the types of certain data dirs from the default value in 
> an existing cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-4176:

Attachment: HDFS-4176.01.patch

Fix checkstyle errors and test failures.

> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4176
> URL: https://issues.apache.org/jira/browse/HDFS-4176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode
>Affects Versions: 2.0.2-alpha, 3.0.0-alpha1
>Reporter: Todd Lipcon
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-4176.00.patch, HDFS-4176.01.patch, namenode.jstack4
>
>
> When the EditLogTailer thread calls rollEdits() on the active NN via RPC, it 
> currently does so without a timeout. So, if the active NN has frozen (but not 
> actually crashed), this call can hang forever. This can then potentially 
> prevent the standby from becoming active.
> This may actually be considered a side effect of HADOOP-6762 -- if the RPC 
> were interruptible, that would also fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9937) Update dfsadmin command line help and HdfsQuotaAdminGuide

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396356#comment-15396356
 ] 

Wei-Chiu Chuang commented on HDFS-9937:
---

My bad. [~ajisakaa], thanks for catching the test failure and reporting it in 
the new jira. Let's review the patch at HDFS-10696.

> Update dfsadmin command line help and HdfsQuotaAdminGuide
> -
>
> Key: HDFS-9937
> URL: https://issues.apache.org/jira/browse/HDFS-9937
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: commandline, supportability
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-9937.01.patch, HDFS-9937.02.patch, 
> HDFS-9937.03.patch, HDFS-9937.04.patch, HDFS-9937.05.patch, HDFS-9937.06.patch
>
>
> dfsadmin command line top-level help menu is not consistent with detailed 
> help menu.
> * -safemode missed options -wait and -forceExit 
> * -restoreFailedStorage options are not described consistently 
> (true/false/check, or Set/Unset/Check?)
> * -setSpaceQuota optionally takes a -storageType parameter, but it's not 
> clear what are the available options. (Seems to be (SSD, DISK, ARCHIVE), from 
> HdfsQuotaAdminGuide.html)
> * -reconfig seems to also take namenode as parameter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10698) Test org.apache.hadoop.cli.TestHDFSCLI fails in trunk

2016-07-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396352#comment-15396352
 ] 

Yongjun Zhang commented on HDFS-10698:
--

Thanks [~jojochuang]!


> Test org.apache.hadoop.cli.TestHDFSCLI fails in trunk
> -
>
> Key: HDFS-10698
> URL: https://issues.apache.org/jira/browse/HDFS-10698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Yongjun Zhang
>
> {code}
> Running org.apache.hadoop.cli.TestHDFSCLI
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 39.887 sec 
> <<< FAILURE! - in org.apache.hadoop.cli.TestHDFSCLI
> testAll(org.apache.hadoop.cli.TestHDFSCLI)  Time elapsed: 39.697 sec  <<< 
> FAILURE!
> java.lang.AssertionError: One of the tests failed. See the Detailed results 
> to identify the command that failed
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
> at 
> org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
> at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:87)
> Results :
> Failed tests:
>   
> TestHDFSCLI.tearDown:87->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263
>  One of the tests failed. See the Detailed results to identify the command 
> that failed
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-10698) Test org.apache.hadoop.cli.TestHDFSCLI fails in trunk

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-10698.

Resolution: Duplicate

thx Yongjun for filing the jira.
This is a dup of HDFS-10696 where a patch is provided.

> Test org.apache.hadoop.cli.TestHDFSCLI fails in trunk
> -
>
> Key: HDFS-10698
> URL: https://issues.apache.org/jira/browse/HDFS-10698
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Yongjun Zhang
>
> {code}
> Running org.apache.hadoop.cli.TestHDFSCLI
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 39.887 sec 
> <<< FAILURE! - in org.apache.hadoop.cli.TestHDFSCLI
> testAll(org.apache.hadoop.cli.TestHDFSCLI)  Time elapsed: 39.697 sec  <<< 
> FAILURE!
> java.lang.AssertionError: One of the tests failed. See the Detailed results 
> to identify the command that failed
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at 
> org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
> at 
> org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
> at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:87)
> Results :
> Failed tests:
>   
> TestHDFSCLI.tearDown:87->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263
>  One of the tests failed. See the Detailed results to identify the command 
> that failed
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots

2016-07-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-8986:

Attachment: (was: HDFS-8986.03.patch)

> Add option to -du to calculate directory space usage excluding snapshots
> 
>
> Key: HDFS-8986
> URL: https://issues.apache.org/jira/browse/HDFS-8986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Gautam Gopalakrishnan
>Assignee: Xiao Chen
> Attachments: HDFS-8986.01.patch, HDFS-8986.02.patch, 
> HDFS-8986.03.patch
>
>
> When running {{hadoop fs -du}} on a snapshotted directory (or one of its 
> children), the report includes space consumed by blocks that are only present 
> in the snapshots. This is confusing for end users.
> {noformat}
> $  hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -createSnapshot /tmp/parent snap1
> Created snapshot /tmp/parent/.snapshot/snap1
> $ hadoop fs -rm -skipTrash /tmp/parent/sub1/*
> ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -deleteSnapshot /tmp/parent snap1
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 0  0  /tmp/parent
> 0  0  /tmp/parent/sub1
> {noformat}
> It would be helpful if we had a flag, say -X, to exclude any snapshot related 
> disk usage in the output



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots

2016-07-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-8986:

Attachment: HDFS-8986.03.patch

Reattaching patch 3 to also add the {{[-x]}} to count's FileSystemShell doc.

> Add option to -du to calculate directory space usage excluding snapshots
> 
>
> Key: HDFS-8986
> URL: https://issues.apache.org/jira/browse/HDFS-8986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Gautam Gopalakrishnan
>Assignee: Xiao Chen
> Attachments: HDFS-8986.01.patch, HDFS-8986.02.patch, 
> HDFS-8986.03.patch
>
>
> When running {{hadoop fs -du}} on a snapshotted directory (or one of its 
> children), the report includes space consumed by blocks that are only present 
> in the snapshots. This is confusing for end users.
> {noformat}
> $  hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -createSnapshot /tmp/parent snap1
> Created snapshot /tmp/parent/.snapshot/snap1
> $ hadoop fs -rm -skipTrash /tmp/parent/sub1/*
> ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -deleteSnapshot /tmp/parent snap1
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 0  0  /tmp/parent
> 0  0  /tmp/parent/sub1
> {noformat}
> It would be helpful if we had a flag, say -X, to exclude any snapshot related 
> disk usage in the output



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10667) Report more accurate info about data corruption location

2016-07-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396239#comment-15396239
 ] 

Yongjun Zhang commented on HDFS-10667:
--

Created HDFS-10698 for the failed testcase. +1 on rev 5 and will commit today.


> Report more accurate info about data corruption location
> 
>
> Key: HDFS-10667
> URL: https://issues.apache.org/jira/browse/HDFS-10667
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Yuanbo Liu
> Attachments: HDFS-10667.001.patch, HDFS-10667.002.patch, 
> HDFS-10667.003.patch, HDFS-10667.004.patch, HDFS-10667.005.patch
>
>
> Per 
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15376897&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15376897
> 129.77 report:
> {code}
> 2016-07-13 11:49:01,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Receiving blk_1116167880_42906656 src: /10.6.134.229:43844 dest: 
> /10.6.129.77:5080
> 2016-07-13 11:49:01,543 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Checksum error in block blk_1116167880_42906656 from /10.6.134.229:43844
> org.apache.hadoop.fs.ChecksumException: Checksum error: 
> DFSClient_NONMAPREDUCE_2019484565_1 at 81920 exp: 1352119728 got: -1012279895
> at 
> org.apache.hadoop.util.NativeCrc32.nativeComputeChunkedSumsByteArray(Native 
> Method)
> at 
> org.apache.hadoop.util.NativeCrc32.verifyChunkedSumsByteArray(NativeCrc32.java:69)
> at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:347)
> at 
> org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:294)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.verifyChunks(BlockReceiver.java:421)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:558)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:789)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:917)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:174)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:80)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> 2016-07-13 11:49:01,543 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for blk_1116167880_42906656
> java.io.IOException: Terminating due to a checksum error.java.io.IOException: 
> Unexpected checksum mismatch while writing blk_1116167880_42906656 from 
> /10.6.134.229:43844
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:571)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:789)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:917)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:174)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:80)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> and
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15378879&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15378879
> {quote}
> While verifying only packet, the position mentioned in the checksum 
> exception, is relative to packet buffer offset, not the block offset. So 
> 81920 is the offset in the exception.
> {quote}
> Create this jira to report more accurate corruption location information: the 
> offset in the file, offset in block, and offset in packet.
> See 
> https://issues.apache.org/jira/browse/HDFS-10587?focusedCommentId=15387083&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15387083



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10698) Test org.apache.hadoop.cli.TestHDFSCLI fails in trunk

2016-07-27 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10698:


 Summary: Test org.apache.hadoop.cli.TestHDFSCLI fails in trunk
 Key: HDFS-10698
 URL: https://issues.apache.org/jira/browse/HDFS-10698
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Yongjun Zhang


{code}
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 39.887 sec <<< 
FAILURE! - in org.apache.hadoop.cli.TestHDFSCLI
testAll(org.apache.hadoop.cli.TestHDFSCLI)  Time elapsed: 39.697 sec  <<< 
FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to 
identify the command that failed
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:87)


Results :

Failed tests:
  
TestHDFSCLI.tearDown:87->CLITestHelper.tearDown:125->CLITestHelper.displayResults:263
 One of the tests failed. See the Detailed results to identify the command that 
failed

Tests run: 1, Failures: 1, Errors: 0, Skipped: 0

{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots

2016-07-27 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-8986:

Attachment: HDFS-8986.03.patch

> Add option to -du to calculate directory space usage excluding snapshots
> 
>
> Key: HDFS-8986
> URL: https://issues.apache.org/jira/browse/HDFS-8986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Gautam Gopalakrishnan
>Assignee: Xiao Chen
> Attachments: HDFS-8986.01.patch, HDFS-8986.02.patch, 
> HDFS-8986.03.patch
>
>
> When running {{hadoop fs -du}} on a snapshotted directory (or one of its 
> children), the report includes space consumed by blocks that are only present 
> in the snapshots. This is confusing for end users.
> {noformat}
> $  hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -createSnapshot /tmp/parent snap1
> Created snapshot /tmp/parent/.snapshot/snap1
> $ hadoop fs -rm -skipTrash /tmp/parent/sub1/*
> ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -deleteSnapshot /tmp/parent snap1
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 0  0  /tmp/parent
> 0  0  /tmp/parent/sub1
> {noformat}
> It would be helpful if we had a flag, say -X, to exclude any snapshot related 
> disk usage in the output



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots

2016-07-27 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396230#comment-15396230
 ] 

Xiao Chen commented on HDFS-8986:
-

Thanks a lot [~jojochuang] for the review! New patch attached with comments 
inline:
bq. in ContentSummary.java, the name of setter method for snapshotLength, 
snapshotFileCount, snapshotDirectoryCount and snapshotSpaceConsumed should be 
prefixed by "set". E.g. setSnapshotLength
I agree setXXX is a better setter name. The reason for these names is 
consistency with the existing setter method naming. It's a public (though 
evolving) API, so I'd want to keep the change minimal.
bq. in ContentSummary#equals(), you may declare a ContentSummary object and 
typecast the to object to it, so as to avoid explicitly typecasting every 
method call. This is just a personal taste, not big deal though.
Good idea, updated.
bq. Please update FileSystemShell.md to include the -x option for the usage of 
du.
Good catch! Updated.
bq. I don't understand this code in INodeDirectory, and I wonder if it has a 
bug. If I understand it correctly, the counts field and snapshotCounts field of 
summary object will be exactly the same. On the contrary, I think you may have 
to declare another method similar to 
DirectoryWithSnapshotFeature.computeContentSummary4Snapshot, but which computes 
content for snapshottable subdirectories and files only.
I think the current patch is correct. It's a bit difficult to read through 
because of the big change in HDFS-4995. But the high-level idea is that 
{{ContentCounts}} is an aggregated calculation. You're right that the 
calculation in {{INodeDirectory#computeContentSummary}} would aggregate the 
same values into {{counts}} and {{snapshotCounts}}, but that's what we want. 
This way, in the final calculation in {{FsUsage$Du#processPath}} we can exclude 
the snapshot portion by computing (All - snapshotAll); see the sketch below. 
I added one more step in the test to create a file as well, after the snapshot 
is taken. 
Makes sense?
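
To make the (All - snapshotAll) step concrete, a minimal sketch using the 
accessor names this patch adds to {{ContentSummary}}:

{code}
ContentSummary summary = fs.getContentSummary(path);
// Aggregated totals include data reachable only through snapshots ...
long total = summary.getLength();
// ... so subtracting the snapshot-only portion yields what -du -x prints.
long excludingSnapshots = total - summary.getSnapshotLength();
{code}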

> Add option to -du to calculate directory space usage excluding snapshots
> 
>
> Key: HDFS-8986
> URL: https://issues.apache.org/jira/browse/HDFS-8986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Gautam Gopalakrishnan
>Assignee: Xiao Chen
> Attachments: HDFS-8986.01.patch, HDFS-8986.02.patch
>
>
> When running {{hadoop fs -du}} on a snapshotted directory (or one of its 
> children), the report includes space consumed by blocks that are only present 
> in the snapshots. This is confusing for end users.
> {noformat}
> $  hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -createSnapshot /tmp/parent snap1
> Created snapshot /tmp/parent/.snapshot/snap1
> $ hadoop fs -rm -skipTrash /tmp/parent/sub1/*
> ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -deleteSnapshot /tmp/parent snap1
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 0  0  /tmp/parent
> 0  0  /tmp/parent/sub1
> {noformat}
> It would be helpful if we had a flag, say -X, to exclude any snapshot related 
> disk usage in the output



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10609) Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396215#comment-15396215
 ] 

Hadoop QA commented on HDFS-10609:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
94 unchanged - 1 fixed = 95 total (was 95) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
58s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
| Timed out junit tests | org.apache.hadoop.hdfs.TestHdfsAdmin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820512/HDFS-10609.002.patch |
| JIRA Issue | HDFS-10609 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 83e587d1cc0e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 54fe17a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16214/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16214/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16

[jira] [Commented] (HDFS-10676) Add namenode metric to measure time spent in generating EDEKs

2016-07-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396198#comment-15396198
 ] 

Xiaoyu Yao commented on HDFS-10676:
---

Thanks [~hanishakoneru] for reporting the issue/posting the patch and 
[~arpitagarwal] for the reviews. 
The code change in patch v03 looks good to me. Just a few comments on the unit 
test:

1. NIT: clusterTestGenerateEDEKTime is too verbose; just {{cluster}} is fine, 
as it is the only cluster used in the test case.

2. The test cluster may not shut down when an exception happens during the 
test, which could cause subsequent test failures. You may wrap it with 
try/finally or, even better, with Java's try-with-resources to ensure the test 
cluster is cleaned up (see the sketch after this list).

3. NIT: There are a few test wrappers (e.g., {{DFSTestUtil#createFile}}) you 
can use to simplify the test file creation below. 
{code} 
clusterTestGenerateEDEKTime.getNameNodeRpc().create
{code}

4. checkstyle issues.
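
For point 2, a minimal sketch (assuming {{MiniDFSCluster}} implements 
AutoCloseable, as it does on recent branches; encryption zone setup and the 
metric assertions are elided):

{code}
try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
    .numDataNodes(1).build()) {
  cluster.waitActive();
  DistributedFileSystem fs = cluster.getFileSystem();
  // Point 3: DFSTestUtil simplifies file creation versus raw RPC calls.
  DFSTestUtil.createFile(fs, new Path("/zone/file"), 1024L, (short) 1, 0xBEEFL);
  // ... assert on the generate-EDEK time metric here ...
}  // cluster.close() runs on every exit path, even if an assertion fails
{code}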

> Add namenode metric to measure time spent in generating EDEKs
> -
>
> Key: HDFS-10676
> URL: https://issues.apache.org/jira/browse/HDFS-10676
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>  Labels: metrics, namenode
> Attachments: HDFS-10676.000.patch, HDFS-10676.001.patch, 
> HDFS-10676.002.patch, HDFS-10676.003.patch
>
>
> A metric to measure the time spent by Namenode in interacting with Key 
> Management System (KMS).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10625) VolumeScanner to report why a block is found bad

2016-07-27 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396168#comment-15396168
 ] 

Yongjun Zhang commented on HDFS-10625:
--

Hi [~linyiqun] and [~shahrs87],

Sorry for the delay. I took a further look, and I think it's good to include 
the HDFS-10626 fix here and mark HDFS-10626 as a duplicate. I'd like to include 
both of you as contributors for this jira.

I looked at the latest patch here; it looks to me that the best place to fix 
this is in BlockSender:
{code}
  long sendBlock(DataOutputStream out, OutputStream baseStream, 
 DataTransferThrottler throttler) throws IOException {
final TraceScope scope = datanode.getTracer().
newScope("sendBlock_" + block.getBlockId());
try {
  return doSendBlock(out, baseStream, throttler);
} finally {
  scope.close();
}
  }
{code}

We can add a catch block here to catch the IOException thrown, then include the 
replica information and throw a new IOException, e.g.:
{code}
try {
  return doSendBlock(out, baseStream, throttler);
} catch (IOException ie) {
  // throw new IOE here with replica info
  throw new IOException(replicaInfoStr, ie);
} finally {
  scope.close();
}
{code}

There is a snippet in the constructor to get the replica info:
{code}
 final Replica replica;
  final long replicaVisibleLength;
  synchronized(datanode.data) { 
replica = getReplica(block, datanode);
replicaVisibleLength = replica.getVisibleLength();
  }
{code}
Looks like we can make this replica a member of BlockSender instead of a local 
variable here, so that we can refer to it when needed, such as for this jira. 
We probably should make {{replicaVisibleLength}} a member and report it as part 
of the replica info too, since this value may change concurrently while a write 
is in progress. Hi [~vinayrpet], what do you think about this 
suggestion? 
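
A minimal sketch of that shape (hypothetical fragment, not the actual patch):

{code}
// Promote the constructor locals to fields so sendBlock() can describe them.
private final Replica replica;
private final long replicaVisibleLength;  // may change while a write is live

private String replicaInfoString() {
  return "replica=" + replica + ", visibleLength=" + replicaVisibleLength;
}
// ... and in sendBlock(): throw new IOException(replicaInfoString(), ie);
{code}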

Thanks.


>  VolumeScanner to report why a block is found bad
> -
>
> Key: HDFS-10625
> URL: https://issues.apache.org/jira/browse/HDFS-10625
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs
>Reporter: Yongjun Zhang
>Assignee: Rushabh S Shah
>  Labels: supportability
> Attachments: HDFS-10625-1.patch, HDFS-10625.patch
>
>
> VolumeScanner may report:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> blk_1170125248_96458336 on /d/dfs/dn
> {code}
> It would be helpful to report the reason why the block is bad, especially 
> when the block is corrupt, where is the first corrupted chunk in the block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396164#comment-15396164
 ] 

Hadoop QA commented on HDFS-4176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 424 unchanged - 2 fixed = 428 total (was 426) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820507/HDFS-4176.00.patch |
| JIRA Issue | HDFS-4176 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4439d1a7da37 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 54fe17a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16213/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16213/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16213/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16213/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> EditLogTailer should call rollEdits with a timeout
> --
>
> Key: HDFS-4

[jira] [Commented] (HDFS-10681) DiskBalancer: query command should report Plan file path apart from PlanID

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396157#comment-15396157
 ] 

Hadoop QA commented on HDFS-10681:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} hadoop-hdfs-project: The patch generated 8 new + 
201 unchanged - 1 fixed = 209 total (was 202) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Format string should use %n rather than n in 
org.apache.hadoop.hdfs.server.diskbalancer.command.QueryCommand.execute(CommandLine)
  At QueryCommand.java:rather than n in 
org.apache.hadoop.hdfs.server.diskbalancer.command.QueryCommand.execute(CommandLine)
  At QueryCommand.java:[line 77] |
| Failed junit tests | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820504/HDFS-10681.001.patch |
| JIRA Issue | HDFS-10681 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux f2979392f925 3.13.0-36-lowlatency #63-Ubunt

[jira] [Commented] (HDFS-4176) EditLogTailer should call rollEdits with a timeout

2016-07-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396136#comment-15396136
 ] 

Hadoop QA commented on HDFS-4176:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 4 new + 424 unchanged - 2 fixed = 428 total (was 426) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.cli.TestHDFSCLI |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820507/HDFS-4176.00.patch |
| JIRA Issue | HDFS-4176 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3abda960bb48 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 54fe17a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16212/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16212/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16212/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16212/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> EditLogTailer sho

[jira] [Commented] (HDFS-10689) "hdfs dfs -chmod 777" does not remove sticky bit

2016-07-27 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396129#comment-15396129
 ] 

Lei (Eddy) Xu commented on HDFS-10689:
--

Thanks for providing the patch, [~manojg].

* The changes to {{FsShellPermissions.java}}, {{UmaskParser.java}} and 
{{ChmodParser}} look unrelated; let's remove them from the patch.
* {{PermissionParser.java#73}}: is this a formatting-only change?
* In {{PermissionParser#applyOctalPattern}}:
{code}
stickyBitType = userType = groupType = othersType = '=';
{code}
Should we set only {{stickyBitType='='}}? (See the sketch after this list for 
the intended octal semantics.)
* {{TestStickyBit.java}}: please clean up the imports. Also:
{code}
// Tear down the test directories
hdfs.delete(sbExplicitTestDir, true);
hdfs.delete(sbOmittedTestDir, true);
assertFalse(hdfs.exists(sbExplicitTestDir));
assertFalse(hdfs.exists(sbOmittedTestDir));
{code}
You don't need to tear down these directories. Also, it may not be necessary 
to test sticky bit persistence by restarting the cluster, as that is already 
covered by the other test cases.
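
To make the intended octal semantics concrete, here is a standalone sketch 
(an illustration only, not the actual Hadoop parser): an octal mode should 
fully replace the old permission, so a 3-digit "755" clears a previously set 
sticky bit just like "0755" does.
{code}
public class OctalModeSketch {
  static short applyOctalMode(String octal) {
    int mode = Integer.parseInt(octal, 8); // "755" -> 0755, "1755" -> 01755
    return (short) (mode & 07777);         // a missing leading digit means 0
  }

  public static void main(String[] args) {
    // Prints 1755 (sticky bit set), then 755 (sticky bit cleared),
    // matching native chmod behavior.
    System.out.println(Integer.toOctalString(applyOctalMode("1755")));
    System.out.println(Integer.toOctalString(applyOctalMode("755")));
  }
}
{code}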

The rest looks good. Thanks


> "hdfs dfs -chmod 777" does not remove sticky bit
> 
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch
>
>
> When a directory permission is modified using hdfs dfs chmod command and when 
> octal/numeric format is used, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10696) TestHDFSCLI fails

2016-07-27 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396125#comment-15396125
 ] 

Xiaoyu Yao commented on HDFS-10696:
---

Thanks [~ajisakaa] for reporting the issue and [~lewuathe] for posting the 
patch. 
The fix looks good to me. +1 after the checkstyle issue is addressed.

> TestHDFSCLI fails
> -
>
> Key: HDFS-10696
> URL: https://issues.apache.org/jira/browse/HDFS-10696
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Kai Sasaki
> Attachments: HDFS-10696.01.patch
>
>
> TestHDFSCLI fails.
> {noformat}2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(177)) -  Comparator: 
> [RegexpComparator]
> 2016-07-27 19:53:20,790 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(179)) -  Comparision result:   
> [fail]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(181)) - Expected output:   [^( 
> |\t)*The storage type specific quota is cleared when -storageType option is 
> specified.( )*]
> 2016-07-27 19:53:20,791 [main] INFO  cli.CLITestHelper 
> (CLITestHelper.java:displayResults(183)) -   Actual output:   
> [-clrSpaceQuota [-storageType ] ...: Clear the 
> space quota for each directory .
> For each directory, attempt to clear the quota. An error will 
> be reported if
> 1. the directory does not exist or is a file, or
> 2. user is not an administrator.
> It does not fault if the directory has no quota.
> The storage type specific quota is cleared when -storageType 
> option is specified.   Available storageTypes are 
> - RAM_DISK
> - DISK
> - SSD
> - ARCHIVE
> ]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10689) "hdfs dfs -chmod 777" does not remove sticky bit

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396068#comment-15396068
 ] 

Wei-Chiu Chuang commented on HDFS-10689:


The test failure was filed in HDFS-10696, and it was caused by HDFS-9937.

> "hdfs dfs -chmod 777" does not remove sticky bit
> 
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch
>
>
> When a directory permission is modified using hdfs dfs chmod command and when 
> octal/numeric format is used, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> filesystem with native chmod.
> 4. However, removing sticky bit permission by explicitly turning off the bit 
> works. hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> {noformat}
> manoj@~/work/hadev-pp: hdfs dfs -chmod 1755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-t   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit  <=== sticky bit still intact
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: hdfs dfs -chmod 0755 /dir_test_with_sticky_bit
> manoj@~/work/hadev-pp: hdfs dfs -ls /
> Found 2 items
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 
> /dir_test_with_sticky_bit
> drwxr-xr-x   - manoj supergroup  0 2016-07-25 11:42 /user
> manoj@~/work/hadev-pp: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10609) Uncaught InvalidEncryptionKeyException during pipeline recovery may abort downstream applications

2016-07-27 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10609:
---
Attachment: HDFS-10609.002.patch

Uploaded patch v2:
* Fixed the bug that broke other tests in {{TestEncryptedTransfer}} (the 
files should be created with the default replication factor of 3).
* Updated the test case to use 6 datanodes. Because 
{{DataStreamer#addDatanode2ExistingPipeline}} retries 3 times upon exception, 
using 6 datanodes ensures that, without the fix, the test fails with 
{{InvalidEncryptionKeyException}} rather than a "no more good datanodes" 
exception.
* Updated the fix so that it is contained inside {{DataStreamer#transfer}} (a 
rough sketch of that shape follows below).
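
For illustration, a minimal sketch of what containing the retry inside 
{{DataStreamer#transfer}} could look like, mirroring the 
{{createBlockOutputStream}} handling quoted in the description (a sketch 
under assumptions, not the actual patch; argument lists are elided):
{code}
private void transfer(/* src, targets, targetStorageTypes, blockToken */)
    throws IOException {
  int refetchEncryptionKey = 1;
  while (true) {
    try {
      // Set up the (possibly encrypted) stream to the new datanode and
      // copy the block; socketSend() is where SASL negotiation happens.
      // ... saslClient.socketSend(...) and the actual transfer ...
      return;
    } catch (InvalidEncryptionKeyException e) {
      if (refetchEncryptionKey-- <= 0) {
        throw e; // give up after one refetch
      }
      DFSClient.LOG.info("Will fetch a new encryption key and retry, "
          + "encryption key was invalid: " + e);
      // Invalidate the cached key so the next attempt fetches a fresh
      // one from the NameNode.
      dfsClient.clearDataEncryptionKey();
    }
  }
}
{code}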

> Uncaught InvalidEncryptionKeyException during pipeline recovery may abort 
> downstream applications
> -
>
> Key: HDFS-10609
> URL: https://issues.apache.org/jira/browse/HDFS-10609
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.0
> Environment: CDH5.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10609.001.patch, HDFS-10609.002.patch
>
>
> In normal operations, if SASL negotiation fails due to 
> {{InvalidEncryptionKeyException}}, it is typically a benign exception, which 
> is caught and retried :
> {code:title=SaslDataTransferServer#doSaslHandshake}
>   if (ioe instanceof SaslException &&
>   ioe.getCause() != null &&
>   ioe.getCause() instanceof InvalidEncryptionKeyException) {
> // This could just be because the client is long-lived and hasn't gotten
> // a new encryption key from the NN in a while. Upon receiving this
> // error, the client will get a new encryption key from the NN and retry
> // connecting to this DN.
> sendInvalidKeySaslErrorMessage(out, ioe.getCause().getMessage());
>   } 
> {code}
> {code:title=DFSOutputStream.DataStreamer#createBlockOutputStream}
> if (ie instanceof InvalidEncryptionKeyException && refetchEncryptionKey > 0) {
> DFSClient.LOG.info("Will fetch a new encryption key and retry, " 
> + "encryption key was invalid when connecting to "
> + nodes[0] + " : " + ie);
> {code}
> However, if the exception is thrown during pipeline recovery, the 
> corresponding code does not handle it properly, and the exception is spilled 
> out to downstream applications, such as SOLR, aborting its operation:
> {quote}
> 2016-07-06 12:12:51,992 ERROR org.apache.solr.update.HdfsTransactionLog: 
> Exception closing tlog.
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=557709482) doesn't exist. Current key: 1350592619
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiatedCipherOption(DataTransferSaslUtil.java:417)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:474)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getEncryptedStreams(SaslDataTransferClient.java:299)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:242)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:211)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:183)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.transfer(DFSOutputStream.java:1308)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1272)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1433)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1147)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:632)
> 2016-07-06 12:12:51,997 ERROR org.apache.solr.update.CommitTracker: auto 
> commit error...:org.apache.solr.common.SolrException: 
> org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException: 
> Can't re-compute encryption key for nonce, since the required block key 
> (keyID=557709482) doesn't exist. Current key: 1350592619
> at 
> org.apache.solr.update.HdfsTransactionLog.close(HdfsTransactionLog.java:316)
> at 
> org.apache.solr.update.TransactionLog.decref(TransactionLog.java:505)
> at org.apache.solr.update.UpdateLog.addOldLog(UpdateLog.java:380)
>  

[jira] [Commented] (HDFS-10689) "hdfs dfs -chmod 777" does not remove sticky bit

2016-07-27 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396059#comment-15396059
 ] 

Manoj Govindassamy commented on HDFS-10689:
---


The TestHDFSCLI failure is not related to the patch; the latest trunk shows 
the same failure.

{noformat}
17699 2016-07-27 10:30:42,785 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(186)) -
17700 2016-07-27 10:30:42,785 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(190)) - Summary results:
17701 2016-07-27 10:30:42,785 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(191)) - --
17702 
17703 2016-07-27 10:30:42,785 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(211)) -Testing mode: test
17704 2016-07-27 10:30:42,785 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(212)) -
17705 2016-07-27 10:30:42,786 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(213)) -  Overall result: --- 
FAIL ---
17706 2016-07-27 10:30:42,786 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(221)) -# Tests pass: 665 
(99%)
17707 2016-07-27 10:30:42,786 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(223)) -# Tests fail: 1 (0%)
17708 2016-07-27 10:30:42,786 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(227)) -  # Validations done: 1726 
(each test may do multiple validations)
17709 2016-07-27 10:30:42,786 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(230)) -
17710 2016-07-27 10:30:42,786 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(231)) - Failing tests:
17711 2016-07-27 10:30:42,786 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(232)) - --
17712 2016-07-27 10:30:42,786 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 595: help: help for dfsadmin 
clrSpaceQuota
17713 2016-07-27 10:30:42,787 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(248)) - 
17714 2016-07-27 10:30:42,787 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(249)) - Passing tests:
17715 2016-07-27 10:30:42,787 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(250)) - --
17716 2016-07-27 10:30:42,787 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(254)) - 1: ls: file using absolute path
17717 2016-07-27 10:30:42,787 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(254)) - 2: ls: file using relative path



17646 2016-07-27 10:45:22,083 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(177)) -  Comparator: 
[RegexpComparator]
17647 2016-07-27 10:45:22,083 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(179)) -  Comparision result:   [fail]
17648 2016-07-27 10:45:22,083 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(181)) - Expected output:   [^( 
|\t)*The storage type specific quota is cleared when -storageType option 
is specified.( )*]
17649 2016-07-27 10:45:22,083 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(183)) -   Actual output:   
[-clrSpaceQuota [-storageType ] ...: Clear 
the space quota for each directory .
17650 For each directory, attempt to clear the quota. An error 
will be reported if
17651 1. the directory does not exist or is a file, or
17652 2. user is not an administrator.
17653 It does not fault if the directory has no quota.
17654 The storage type specific quota is cleared when 
-storageType option is specified.   Available storageTypes are
17655 - RAM_DISK
17656 - DISK
17657 - SSD
17658 - ARCHIVE

{noformat}





> "hdfs dfs -chmod 777" does not remove sticky bit
> 
>
> Key: HDFS-10689
> URL: https://issues.apache.org/jira/browse/HDFS-10689
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
>Priority: Minor
> Attachments: HDFS-10689.001.patch
>
>
> When a directory permission is modified using hdfs dfs chmod command and when 
> octal/numeric format is used, the leading sticky bit is not fully honored.
> 1. Create a dir dir_test_with_sticky_bit
> 2. Apply sticky bit permission on the dir : hdfs dfs -chmod 1755 
> /dir_test_with_sticky_bit
> 3. Remove sticky bit permission on the dir: hdfs dfs -chmod 755 
> /dir_test_with_sticky_bit
> Expected: Remove the sticky bit on the dir, as it happens on Mac/Linux native 
> 

[jira] [Commented] (HDFS-10655) Fix path related byte array conversion bugs

2016-07-27 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396051#comment-15396051
 ] 

Jing Zhao commented on HDFS-10655:
--

Thanks for the response, [~daryn]. I agree that in the current code base, 
whether to append "/" does not matter, so I'll leave it to you to decide 
whether to keep the original behavior. Other than that, +1.
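
As a side note, a standalone sketch of the separator handling this JIRA is 
about (an illustration only, not the actual {{DFSUtil}} code): splitting a 
path into byte[] components while collapsing runs of '/' so that "/a//b" and 
"/a/b" yield the same components.
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PathSplitSketch {
  static List<byte[]> splitPath(byte[] path) {
    List<byte[]> components = new ArrayList<>();
    int start = 0;
    for (int i = 0; i <= path.length; i++) {
      if (i == path.length || path[i] == '/') {
        if (i > start) { // skip empty components from duplicate separators
          components.add(Arrays.copyOfRange(path, start, i));
        }
        start = i + 1;
      }
    }
    return components;
  }

  public static void main(String[] args) {
    // Both print 2 components: "a" and "b".
    System.out.println(splitPath("/a//b".getBytes()).size());
    System.out.println(splitPath("/a/b".getBytes()).size());
  }
}
{code}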

> Fix path related byte array conversion bugs
> ---
>
> Key: HDFS-10655
> URL: https://issues.apache.org/jira/browse/HDFS-10655
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-10655.patch, HDFS-10655.patch
>
>
> {{DFSUtil.bytes2ByteArray}} does not always properly handle runs of multiple 
> separators, nor does it handle relative paths correctly.
> {{DFSUtil.byteArray2PathString}} does not rebuild the path correctly unless 
> the specified range is the entire component array.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


