[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15492463#comment-15492463
 ] 

Xiao Chen commented on HDFS-10823:
--

Thanks for the patch Andrew! I didn't find a better alternative, and I think 
this makes sense.

Some review comments so far; I will do a more thorough review later.
- {{FileSystem#DirectoryEntries}}:
-- Member variables could be final.
-- The new {{listStatus}} should {{return new DirectoryEntries(listing, token, 
false);}}
-- How do you feel about calling this new {{listStatus}} something like 
{{listStatusBatch}}? That is more consistent with the webhdfs/httpfs 
parameters, and removes the confusion for users around the method overload 
(see the sketch after this list).
- {{WebHdfsFileSystem}}
-- It seems we can remove the {{DirListingIterator}} there.
- {{HttpFSFileSystem}}
-- Maybe the helper methods can go into {{HttpFSUtils}}?
-- Maybe we could reorganize the code in {{listStatus}} to parse 
{{remainingEntries}} first, then {{newToken}}, so that we don't need two 
duplicate comments.
- We'll need (a bunch of) documentation.
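
For reference, a minimal sketch of the batched-listing shape discussed above, 
assuming the {{DirectoryEntries}}/{{listStatusBatch}} naming from this review; 
field and method names are illustrative assumptions, not the committed API:

{code:java}
import org.apache.hadoop.fs.FileStatus;

// Hedged sketch only: shapes are assumptions based on the review above.
public class DirectoryEntries {
  private final FileStatus[] entries;  // one batch of listing results
  private final byte[] token;          // opaque cookie for the next batch
  private final boolean hasMore;       // whether another batch remains

  public DirectoryEntries(FileStatus[] entries, byte[] token, boolean hasMore) {
    this.entries = entries;
    this.token = token;
    this.hasMore = hasMore;
  }

  public FileStatus[] getEntries() { return entries; }
  public byte[] getToken() { return token; }
  public boolean hasMore() { return hasMore; }
}
{code}

A {{listStatusBatch(path, token)}} overload would then return one 
{{DirectoryEntries}} per call, with callers passing the returned token back in 
until {{hasMore()}} is false.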

> Implement HttpFSFileSystem#listStatusIterator
> -
>
> Key: HDFS-10823
> URL: https://issues.apache.org/jira/browse/HDFS-10823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, 
> HDFS-10823.003.patch
>
>
> Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS 
> too.






[jira] [Commented] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15492134#comment-15492134
 ] 

Hadoop QA commented on HDFS-10489:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 46s{color} 
| {color:red} root generated 3 new + 708 unchanged - 0 fixed = 711 total (was 
708) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 31s{color} | {color:orange} root: The patch generated 1 new + 556 unchanged 
- 0 fixed = 557 total (was 556) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
6s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 58m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.TestHdfsConfigFields |
|   | hadoop.hdfs.TestFileCreationDelete |
|   | hadoop.hdfs.TestRenameWhileOpen |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10489 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828566/HDFS-10489.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux e123cc89f692 3.13.0-36-lowlatency 

[jira] [Commented] (HDFS-10862) Typos in 7 log messages

2016-09-14 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15492098#comment-15492098
 ] 

Yiqun Lin commented on HDFS-10862:
--

Hi, [~MehranHassani], thanks for finding so many typos. Are you planning to 
work on this now? If not, I'd like to make a quick fix. Let me know your 
thoughts, thanks.

> Typos in 7 log messages
> ---
>
> Key: HDFS-10862
> URL: https://issues.apache.org/jira/browse/HDFS-10862
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mehran Hassani
>Priority: Trivial
>  Labels: newbie
>
> I am conducting research on log-related bugs. I tried to make a tool to fix 
> repetitive yet simple patterns of bugs that are related to logs. Typos in log 
> messages are one of the recurring bugs, so I made a tool to find typos 
> in log statements. During my experiments, I managed to find the following 
> typos in Hadoop HDFS:
> In file 
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java,
>  LOG.info((success ? "S" : "Uns") +"uccessfully sent block report 0x" 
> +Long.toHexString(reportId) + "   containing " + reports.length +" storage 
> report(s)  of which we sent " + numReportsSent + "." +" The reports had " + 
> totalBlockCount +" total blocks and used " + numRPCs +" RPC(s). This took " + 
> brCreateCost +" msec to generate and " + brSendCost +" msecs for RPC and NN 
> processing." +" Got back " +((nCmds == 0) ? "no commands" : ((nCmds == 1) ? 
> "one command: " + cmds.get(0) :(nCmds + " commands: " + Joiner.on("; 
> ").join(cmds +"."), 
> uccessfullysuccessfully
> In file 
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java,
>  LOG.info("Balancing bandwith is " + bandwidth + " bytes/s"), 
> bandwith should be bandwidth
> In file 
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java,
>  FsDatasetImpl.LOG.info("The volume " + v + " is closed while " +"addng 
> replicas  ignored."), 
> addng should be adding 
> In file 
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CancelDelegationTokenServlet.java,
>  LOG.info("Exception while cancelling token. Re-throwing. "  e), 
> cancelling should be canceling
> In file 
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java,
>  NameNode.LOG.info("Caching file names occuring more than " + threshold+ " 
> times"), 
> occuring should be occurring
> In file 
> /hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java,
>  LOG.info("NNStorage.attemptRestoreRemovedStorage: check removed(failed) 
> "+"storarge. removedStorages size = " + removedStorageDirs.size()), 
> storarge should be storage
> In file 
> /hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java,
>  LOG.info("Partical read. Asked offset: " + offset + " count: " + count+ " 
> and read back: " + readCount + " file size: "+ attrs.getSize()), 
> Partical should be Partial






[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15492030#comment-15492030
 ] 

Hadoop QA commented on HDFS-10823:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 18s{color} | {color:orange} root: The patch generated 6 new + 909 unchanged 
- 1 fixed = 915 total (was 910) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-httpfs generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m  2s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
10s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
22s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-httpfs |
|  |  Nullcheck of token at line 336 of value previously dereferenced in 
org.apache.hadoop.fs.http.server.HttpFSServer.get(String, 
HttpFSParametersProvider$OperationParam, Parameters, HttpServletRequest)  At 
HttpFSServer.java:336 of value previously dereferenced in 
org.apache.hadoop.fs.http.server.HttpFSServer.get(String, 
HttpFSParametersProvider$OperationParam, Parameters, HttpServletRequest)  At 
HttpFSServer.java:[line 333] |
| Failed junit tests | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828562/HDFS-10823.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b89d3e6836f0 3.13.0-92-generic 
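
The FindBugs RCN warning above ("nullcheck of value previously dereferenced") 
flags a common pattern; a generic illustration, not the actual HttpFSServer 
code:

{code:java}
// Generic RCN example: the value is dereferenced before the null check,
// so the check is either dead code or comes too late to prevent an NPE.
public class RcnExample {
  static int tokenLength(String token) {
    int len = token.length();  // dereference first: NPEs when token == null
    if (token != null) {       // redundant null check flagged by FindBugs
      return len;
    }
    return 0;
  }
}
{code}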

[jira] [Commented] (HDFS-9895) Remove unnecessary conf cache from DataNode

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15492022#comment-15492022
 ] 

Hadoop QA commented on HDFS-9895:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 10 new + 279 unchanged - 16 fixed = 289 total (was 295) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9895 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828553/HDFS-9895-HDFS-9000.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1f653e9c522c 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2a8f55a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16747/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16747/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16747/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16747/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove unnecessary conf cache 

[jira] [Commented] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491925#comment-15491925
 ] 

Hadoop QA commented on HDFS-10637:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 11 new + 998 unchanged - 11 fixed = 1009 total (was 1009) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10637 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828541/HDFS-10637.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2a0717c63288 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2a8f55a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16746/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16746/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16746/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16746/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16746/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Modifications to remove the assumption that 

[jira] [Updated] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-09-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10489:
-
Status: Patch Available  (was: Open)

> Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
> ---
>
> Key: HDFS-10489
> URL: https://issues.apache.org/jira/browse/HDFS-10489
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-10489.01.patch, HDFS-10489.02.patch
>
>
> When working on HADOOP-13155, we 
> [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117]
>  and concluded that we should use the common config key for the key provider URI.
> We can deprecate the dfs. key for 3.0.0.






[jira] [Commented] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-09-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491923#comment-15491923
 ] 

Xiao Chen commented on HDFS-10489:
--

Thanks for the input, Andrew! It totally makes sense to me; I attached patch 2 
to deprecate it.

Another thing here: I added fallback logic to the current usage of the dfs 
key, to also try the hadoop key before failing. This is incompatible, but it 
would make the 3.x to 4.x upgrade friendlier, since people can choose to use 
the hadoop key in 3.x and theoretically have a transparent upgrade to 4.x. A 
minimal sketch of the fallback is below.
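
A minimal sketch of that fallback, assuming the two key names under 
discussion; the exact constants and flow live in the patch itself:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class KeyProviderUriResolver {
  // Prefer the deprecated dfs. key if set; otherwise try the common
  // hadoop key before failing. Key names are assumptions for illustration.
  static String resolve(Configuration conf) {
    String uri = conf.getTrimmed("dfs.encryption.key.provider.uri");
    if (uri == null || uri.isEmpty()) {
      uri = conf.getTrimmed("hadoop.security.key.provider.path");
    }
    return uri;
  }
}
{code}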

> Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
> ---
>
> Key: HDFS-10489
> URL: https://issues.apache.org/jira/browse/HDFS-10489
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-10489.01.patch, HDFS-10489.02.patch
>
>
> When working on HADOOP-13155, we 
> [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117]
>  and concluded that we should use the common config key for the key provider URI.
> We can deprecate the dfs. key for 3.0.0.






[jira] [Updated] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-09-14 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-10489:
-
Attachment: HDFS-10489.02.patch

> Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
> ---
>
> Key: HDFS-10489
> URL: https://issues.apache.org/jira/browse/HDFS-10489
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-10489.01.patch, HDFS-10489.02.patch
>
>
> When working on HADOOP-13155, we 
> [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117]
>  and concluded that we should use the common config key for the key provider URI.
> We can deprecate the dfs. key for 3.0.0.






[jira] [Updated] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10823:
---
Attachment: (was: HDFS-10823.003.patch)

> Implement HttpFSFileSystem#listStatusIterator
> -
>
> Key: HDFS-10823
> URL: https://issues.apache.org/jira/browse/HDFS-10823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, 
> HDFS-10823.003.patch
>
>
> Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS 
> too.






[jira] [Updated] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10823:
---
Attachment: HDFS-10823.003.patch

Woops, consolidated patch attached.

> Implement HttpFSFileSystem#listStatusIterator
> -
>
> Key: HDFS-10823
> URL: https://issues.apache.org/jira/browse/HDFS-10823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, 
> HDFS-10823.003.patch
>
>
> Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS 
> too.






[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491863#comment-15491863
 ] 

Hadoop QA commented on HDFS-10823:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-10823 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-10823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828560/HDFS-10823.003.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16748/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Implement HttpFSFileSystem#listStatusIterator
> -
>
> Key: HDFS-10823
> URL: https://issues.apache.org/jira/browse/HDFS-10823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, 
> HDFS-10823.003.patch
>
>
> Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS 
> too.






[jira] [Updated] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10823:
---
Attachment: HDFS-10823.003.patch

One more pass over precommit.

> Implement HttpFSFileSystem#listStatusIterator
> -
>
> Key: HDFS-10823
> URL: https://issues.apache.org/jira/browse/HDFS-10823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch, 
> HDFS-10823.003.patch
>
>
> Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS 
> too.






[jira] [Commented] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-09-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491851#comment-15491851
 ] 

Andrew Wang commented on HDFS-10489:


We can certainly deprecate in 2.x and remove in 3.x if we wish, but I'm 
wondering if there's value in removing over just deprecating.

Typically we remove in situations like:

# The implementation has changed such that there's no good way of translating 
the old key -> new key.
# There's a maintenance burden to keeping the old key around.
# It's confusing for users. We should try to minimize surprise.

I believe the first two do not apply here.

Regarding users, let's say a 2.x client is using the 
{{dfs.encryption.key.provider.uri}} config key. Then, they upgrade to 3.x. If 
we remove the dfs key, they'll get an error that {{hadoop.security...}} is not 
configured, and they'll need to change that key name over.

Alternatively, if we add a DeprecationDelta, the old key will be forwarded to 
the new one, a warning will go into their log file, but the client will still 
work. I feel this is a nicer user experience. A sketch of the wiring follows.
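
For reference, a minimal sketch of wiring up such a deprecation via 
{{Configuration.DeprecationDelta}}; the new key name is the one from 
HADOOP-13155, but treat the exact constants as assumptions rather than the 
patch contents:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configuration.DeprecationDelta;

public class KeyProviderDeprecation {
  // Forwards reads of the old dfs. key to the common key and logs a
  // one-time deprecation warning, so old configs keep working.
  public static void register() {
    Configuration.addDeprecations(new DeprecationDelta[] {
        new DeprecationDelta("dfs.encryption.key.provider.uri",
            "hadoop.security.key.provider.path")
    });
  }
}
{code}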

> Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
> ---
>
> Key: HDFS-10489
> URL: https://issues.apache.org/jira/browse/HDFS-10489
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-10489.01.patch
>
>
> When working on HADOOP-13155, we 
> [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117]
>  and concluded that we should use the common config key for the key provider URI.
> We can deprecate the dfs. key for 3.0.0.






[jira] [Commented] (HDFS-9895) Remove unnecessary conf cache from DataNode

2016-09-14 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491812#comment-15491812
 ] 

Xiaobing Zhou commented on HDFS-9895:
-

Thank you [~arpiagariu] for the review. v002 is posted. It changes the dn 
member of DNConf to the Configurable type, since only Configurable#getConf is 
called. A minimal sketch of the change is below.
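
A minimal sketch of that change, with the field and constructor shapes assumed 
from the description above rather than taken from the patch:

{code:java}
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;

// DNConf only needs getConf(), so it can hold the narrower Configurable
// type instead of the full DataNode.
class DNConf {
  private final Configurable dn;  // was: DataNode dn

  DNConf(Configurable dn) {
    this.dn = dn;
  }

  Configuration getConf() {
    return dn.getConf();
  }
}
{code}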

> Remove unnecessary conf cache from DataNode
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895-HDFS-9000.002.patch, HDFS-9895.000.patch, 
> HDFS-9895.001.patch
>
>
> Since DataNode inherits ReconfigurableBase, whose Configured base class 
> already maintains the configuration, DataNode#conf should be removed for 
> brevity.






[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache from DataNode

2016-09-14 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9895:

Attachment: HDFS-9895-HDFS-9000.002.patch

> Remove unnecessary conf cache from DataNode
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895-HDFS-9000.002.patch, HDFS-9895.000.patch, 
> HDFS-9895.001.patch
>
>
> Since DataNode inherits ReconfigurableBase, whose Configured base class 
> already maintains the configuration, DataNode#conf should be removed for 
> brevity.






[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache from DataNode

2016-09-14 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9895:

Attachment: HDFS-9895.002.patch

> Remove unnecessary conf cache from DataNode
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895.000.patch, HDFS-9895.001.patch
>
>
> Since DataNode inherits ReconfigurableBase, whose Configured base class 
> already maintains the configuration, DataNode#conf should be removed for 
> brevity.






[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache from DataNode

2016-09-14 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9895:

Attachment: (was: HDFS-9895.002.patch)

> Remove unnecessary conf cache from DataNode
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895.000.patch, HDFS-9895.001.patch
>
>
> Since DataNode inherits ReconfigurableBase, whose Configured base class 
> already maintains the configuration, DataNode#conf should be removed for 
> brevity.






[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache in DataNode

2016-09-14 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9895:

Summary: Remove unnecessary conf cache in DataNode  (was: Push up 
DataNode#conf to base class)

> Remove unnecessary conf cache in DataNode
> -
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895.000.patch, HDFS-9895.001.patch
>
>
> Since DataNode inherits ReconfigurableBase, whose Configured base class 
> already maintains the configuration, DataNode#conf should be removed for 
> brevity.






[jira] [Updated] (HDFS-9895) Remove unnecessary conf cache from DataNode

2016-09-14 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-9895:

Summary: Remove unnecessary conf cache from DataNode  (was: Remove 
unnecessary conf cache in DataNode)

> Remove unnecessary conf cache from DataNode
> ---
>
> Key: HDFS-9895
> URL: https://issues.apache.org/jira/browse/HDFS-9895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-9895.000.patch, HDFS-9895.001.patch
>
>
> Since DataNode inherits ReconfigurableBase, whose Configured base class 
> already maintains the configuration, DataNode#conf should be removed for 
> brevity.






[jira] [Commented] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-09-14 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491803#comment-15491803
 ] 

Xiao Chen commented on HDFS-10489:
--

Thanks for the explanation. So this jira will deprecate the key in 3.x and 
remove it in 4.x, right? I was under the wrong impression that this would be 
done in 2.x / 3.x, respectively.
Will post an updated patch soon. Thanks Andrew.

> Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
> ---
>
> Key: HDFS-10489
> URL: https://issues.apache.org/jira/browse/HDFS-10489
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-10489.01.patch
>
>
> When working on HADOOP-13155, we 
> [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117]
>  and concluded that we should use the common config key for the key provider URI.
> We can deprecate the dfs. key for 3.0.0.






[jira] [Commented] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491771#comment-15491771
 ] 

Hadoop QA commented on HDFS-10823:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 35s{color} | {color:orange} root: The patch generated 8 new + 909 unchanged 
- 1 fixed = 917 total (was 910) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-httpfs generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
30s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
30s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-httpfs |
|  |  new 
org.apache.hadoop.fs.http.server.FSOperations$FSListStatusBatch(String, byte[]) 
may expose internal representation by storing an externally mutable object into 
FSOperations$FSListStatusBatch.token  At FSOperations.java:internal 
representation by storing an externally mutable object into 
FSOperations$FSListStatusBatch.token  At FSOperations.java:[line 661] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828538/HDFS-10823.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b52af3e5424b 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven 
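
The FindBugs EI2 warning above flags storing an externally mutable array; a 
generic illustration of the pattern and the usual defensive-copy fix, not the 
actual FSOperations$FSListStatusBatch code:

{code:java}
// EI2 example: cloning on the way in (and out) keeps callers from
// mutating this object's internal state through the shared array.
public class TokenHolder {
  private final byte[] token;

  public TokenHolder(byte[] token) {
    this.token = (token == null) ? null : token.clone();
  }

  public byte[] getToken() {
    return (token == null) ? null : token.clone();
  }
}
{code}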

[jira] [Commented] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491737#comment-15491737
 ] 

Hadoop QA commented on HDFS-10745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 539 unchanged - 5 fixed = 546 total (was 544) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2245 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
58s{color} | {color:red} The patch 109 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
14s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 57s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
19s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Dead store to src in 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirectory,
 String, boolean)  At 

[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15491738#comment-15491738
 ] 

Hadoop QA commented on HDFS-10824:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 205 unchanged - 1 fixed = 206 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRead |
|   | hadoop.hdfs.TestDFSStorageStateRecovery |
|   | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.server.datanode.TestReadOnlySharedStorage |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.datanode.TestDataNodeTransferSocketSize |
|   | hadoop.hdfs.TestDFSRollback |
|   | hadoop.hdfs.TestDFSUpgrade |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer |
|   | hadoop.hdfs.TestSetrepIncreasing |
|   | hadoop.hdfs.TestRenameWhileOpen |
|   | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.TestInjectionForSimulatedStorage |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeInitStorage |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.TestWriteBlockGetsBlockLengthHint |
|   | hadoop.hdfs.server.namenode.TestFileLimit |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestSmallBlock |
|   | hadoop.hdfs.TestReplication |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | 

[jira] [Commented] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491729#comment-15491729
 ] 

Hadoop QA commented on HDFS-10745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 539 unchanged - 5 fixed = 546 total (was 544) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2245 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
56s{color} | {color:red} The patch has 109 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_111. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Dead store to src in 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirectory,
 String, boolean)  At 

[jira] [Commented] (HDFS-10489) Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones

2016-09-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491727#comment-15491727
 ] 

Andrew Wang commented on HDFS-10489:


Deprecation typically means adding a DeprecationDelta so that the old key 
transitions over to the new, preferred key. In cases like this, since the config 
value is the same, there might not be much benefit to outright removing the key 
compared to just deprecating it. 
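For illustration, such a mapping is registered once at startup via 
{{Configuration.addDeprecations}}. A minimal sketch (the common key name follows 
HADOOP-13155; the exact registration point, e.g. HdfsConfiguration, is an 
assumption):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configuration.DeprecationDelta;

public class KeyProviderDeprecationSketch {
  // Sketch only: map the old HDFS-specific key to the common key so existing
  // configs keep working while logging a deprecation warning.
  static void registerDeprecations() {
    Configuration.addDeprecations(new DeprecationDelta[] {
        new DeprecationDelta("dfs.encryption.key.provider.uri",
            "hadoop.security.key.provider.path")
    });
  }
}
{code}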

> Deprecate dfs.encryption.key.provider.uri for HDFS encryption zones
> ---
>
> Key: HDFS-10489
> URL: https://issues.apache.org/jira/browse/HDFS-10489
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.6.4
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HDFS-10489.01.patch
>
>
> When working on HADOOP-13155, we 
> [discussed|https://issues.apache.org/jira/browse/HADOOP-13155?focusedCommentId=15315117=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15315117]
>  and concluded that we should use the common config key for key provider uri.
> We can deprecate the dfs. key for 3.0.0.






[jira] [Updated] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-09-14 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10637:
--
Status: Patch Available  (was: Open)

> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch, HDFS-10637.002.patch, 
> HDFS-10637.003.patch, HDFS-10637.004.patch, HDFS-10637.005.patch, 
> HDFS-10637.006.patch, HDFS-10637.007.patch, HDFS-10637.008.patch
>
>
> Modifications to {{FsVolumeSpi}} and {{FsVolumeImpl}} to remove references to 
> {{java.io.File}}.






[jira] [Updated] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-09-14 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10637:
--
Attachment: HDFS-10637.008.patch

> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch, HDFS-10637.002.patch, 
> HDFS-10637.003.patch, HDFS-10637.004.patch, HDFS-10637.005.patch, 
> HDFS-10637.006.patch, HDFS-10637.007.patch, HDFS-10637.008.patch
>
>
> Modifications to {{FsVolumeSpi}} and {{FsVolumeImpl}} to remove references to 
> {{java.io.File}}.






[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-14 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491623#comment-15491623
 ] 

Erik Krogen commented on HDFS-10843:


But you shouldn't be calling {{BlockInfo.convertToCompleteBlock}} anyway, you 
should be calling {{BlockManager.completeBlock}}. {{completeBlock}} has other 
operations, e.g. checking for minimum replication and updating the block 
totals, that must be completed every time a block is completed. A separate 
{{BlockManager.convertToCompleteBlock}} method may imply that 
{{convertToCompleteBlock}} is an acceptable way to complete a block (bypassing 
the other code in {{completeBlock}}) when really the only acceptable way is 
through {{BlockManager.completeBlock}}.  

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch, HDFS-10843.003.patch, HDFS-10843.004.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Updated] (HDFS-10823) Implement HttpFSFileSystem#listStatusIterator

2016-09-14 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10823:
---
Attachment: HDFS-10823.002.patch

New patch attached. Fixed up the precommit checks. Also downgraded the new 
FileSystem method to protected, and used a wrapper FS to access the batched 
listing API where we need it in HttpFS.
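A rough sketch of the wrapper-FS idea (the class name and the exact 
{{listStatusBatch}} signature are assumptions; the committed patch may differ):

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: FilterFileSystem delegates to the wrapped fs, and this
// subclass widens the protected batched-listing call to public so HttpFS
// server code can reach it.
class BatchedListingFileSystem extends FilterFileSystem {
  BatchedListingFileSystem(FileSystem fs) {
    super(fs);
  }

  @Override
  public FileSystem.DirectoryEntries listStatusBatch(Path f, byte[] token)
      throws IOException {
    // assumes FilterFileSystem forwards this to the wrapped fs
    return super.listStatusBatch(f, token);
  }
}
{code}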

> Implement HttpFSFileSystem#listStatusIterator
> -
>
> Key: HDFS-10823
> URL: https://issues.apache.org/jira/browse/HDFS-10823
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: HDFS-10823.001.patch, HDFS-10823.002.patch
>
>
> Let's expose the same functionality added in HDFS-10784 for WebHDFS in HttpFS 
> too.






[jira] [Comment Edited] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-14 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491505#comment-15491505
 ] 

Erik Krogen edited comment on HDFS-10843 at 9/14/16 10:26 PM:
--

What advantage is there to wrapping those two specific calls in a new method 
{{BlockManager.convertToCompleteBlock}} rather than just calling both from 
within {{BlockManager.completeBlock}} as in patch v003? I think the distinction 
between the two methods would be a little confusing. 


was (Author: xkrogen):
What advantage is there to wrapping those two specific calls in a new method 
{{BlockManager.convertToCompleteBlock}} rather than just calling both from 
within {{BlockManager.completeBlock}} as in patch v003?? I think the 
distinction between the two methods would be a little confusing. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch, HDFS-10843.003.patch, HDFS-10843.004.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-14 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491607#comment-15491607
 ] 

Konstantin Shvachko commented on HDFS-10843:


I am just trying to make the coupling of completing a block and changing space 
consumed more explicit.
With the current patch one should _remember_ to call 
{{updateSpaceForCompleteBlock()}} whenever a block is completed via 
{{BlockInfo.convertToCompleteBlock()}}. If we wrap them together, you don't need 
to remember; you just call the new method. This minimizes the chances of others 
making a mistake.
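A self-contained sketch of the wrapping being discussed (everything except the 
{{completeBlock}}/{{convertToCompleteBlock}} names is a stand-in for the real 
{{BlockManager}}/{{BlockInfo}} internals):

{code}
class BlockManagerSketch {
  static class BlockInfo {
    private boolean complete;
    boolean isComplete() { return complete; }
    void convertToCompleteBlock() { complete = true; }
  }

  /** The single sanctioned entry point for completing a block. */
  void completeBlock(BlockInfo curBlock) {
    if (curBlock.isComplete()) {
      return;
    }
    // The quota update happens while the block is still COMMITTED, and the
    // state conversion follows immediately: a caller cannot do one and
    // forget the other.
    updateSpaceForCompleteBlock(curBlock);
    curBlock.convertToCompleteBlock();
    // ...followed by the min-replication checks and block-total updates
    // mentioned earlier in the thread.
  }

  void updateSpaceForCompleteBlock(BlockInfo b) {
    // elided: adjust the cached directory space usage to the final size
  }
}
{code}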

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch, HDFS-10843.003.patch, HDFS-10843.004.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Updated] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10745:
-
Status: Patch Available  (was: Reopened)

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745-branch-2.7.patch, HDFS-10745.2.patch, 
> HDFS-10745.branch-2.patch, HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][], can be eliminated by resolving directly 
> to an IIP. The IIP will contain the resolved path if required.






[jira] [Updated] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-10745:
-
Attachment: HDFS-10745-branch-2.7.patch

The backport to branch-2.7 was quite unclean. I'm taking another look at the 
patch while Jenkins runs.

[~kihwal] [~daryn], it would be much appreciated if you could take a look at the 
2.7 patch.

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745-branch-2.7.patch, HDFS-10745.2.patch, 
> HDFS-10745.branch-2.patch, HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][], can be eliminated by resolving directly 
> to an IIP. The IIP will contain the resolved path if required.






[jira] [Reopened] (HDFS-10745) Directly resolve paths into INodesInPath

2016-09-14 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang reopened HDFS-10745:
--

Sorry to reopen the JIRA; testing the branch-2.7 patch.

> Directly resolve paths into INodesInPath
> 
>
> Key: HDFS-10745
> URL: https://issues.apache.org/jira/browse/HDFS-10745
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-10745.2.patch, HDFS-10745.branch-2.patch, 
> HDFS-10745.patch
>
>
> The intermediate resolution to a string, only to be decomposed by 
> {{INodesInPath}} back into a byte[][], can be eliminated by resolving directly 
> to an IIP. The IIP will contain the resolved path if required.






[jira] [Commented] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-14 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491551#comment-15491551
 ] 

Zhe Zhang commented on HDFS-10475:
--

Nice discussion guys.

A quick note from looking at the code:
{code}
public Writable call(RPC.Server server, String protocol,
    Writable writableRequest, long receiveTime) throws Exception {
  ...
  long startTime = Time.now();
  int qTime = (int) (startTime - receiveTime);
  ...
  try {
    server.rpcDetailedMetrics.init(protocolImpl.protocolClass);
    result = service.callBlockingMethod(methodDescriptor, null, param);
    ...
  } finally {
    int processingTime = (int) (Time.now() - startTime);
    ...
    server.updateMetrics(detailedMetricsName, qTime, processingTime);
{code}

So the reported queue time is from when the RPC request enters the *RPC queue* 
to the time it exits the queue. The processing time is from the time it exits 
the RPC queue until the request is completed. The proposed _lock time_ will be 
part of the _processing time_.
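To make the nesting concrete, here is a minimal, hedged sketch of how a 
lock-hold timer sits strictly inside the processing window (not the actual 
FSNamesystem code; the names and threshold are illustrative):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class InstrumentedLockSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private long lockAcquiredAtNanos;

  void writeLock() {
    lock.writeLock().lock();
    lockAcquiredAtNanos = System.nanoTime(); // "lock time" starts here
  }

  void writeUnlock() {
    long heldNanos = System.nanoTime() - lockAcquiredAtNanos;
    lock.writeLock().unlock();
    // The held time is, by construction, a slice of the processing time
    // measured around the whole RPC call in the snippet above.
    if (heldNanos > 1_000_000_000L) { // 1s threshold, illustrative only
      System.err.println("write lock held for " + heldNanos + " ns");
    }
  }
}
{code}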

> Adding metrics for long FSNamesystem read and write locks
> -
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow-up of the comment on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]: 
> add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on 
> the namenode, similar to what we have for slow write/network WARN/metrics on 
> the datanode.






[jira] [Updated] (HDFS-10862) Typos in 7 log messages

2016-09-14 Thread Mehran Hassani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehran Hassani updated HDFS-10862:
--
Description: 
I am conducting research on log related bugs. I tried to make a tool to fix 
repetitive yet simple patterns of bugs that are related to logs. Typos in log 
messages are one of the reoccurring bugs. Therefore, I made a tool find typos 
in log statements. During my experiments, I managed to find the following typos 
in Hadoop HDFS:

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java,
 LOG.info((success ? "S" : "Uns") +"uccessfully sent block report 0x" 
+Long.toHexString(reportId) + "   containing " + reports.length +" storage 
report(s)  of which we sent " + numReportsSent + "." +" The reports had " + 
totalBlockCount +" total blocks and used " + numRPCs +" RPC(s). This took " + 
brCreateCost +" msec to generate and " + brSendCost +" msecs for RPC and NN 
processing." +" Got back " +((nCmds == 0) ? "no commands" : ((nCmds == 1) ? 
"one command: " + cmds.get(0) :(nCmds + " commands: " + Joiner.on("; 
").join(cmds +"."), 
uccessfully should be successfully

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java,
 LOG.info("Balancing bandwith is " + bandwidth + " bytes/s"), 
bandwith should be bandwidth

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java,
 FsDatasetImpl.LOG.info("The volume " + v + " is closed while " +"addng 
replicas  ignored."), 
addng should be adding 

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CancelDelegationTokenServlet.java,
 LOG.info("Exception while cancelling token. Re-throwing. "  e), 
cancelling should be canceling

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java,
 NameNode.LOG.info("Caching file names occuring more than " + threshold+ " 
times"), 
occuring should be occurring

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java,
 LOG.info("NNStorage.attemptRestoreRemovedStorage: check removed(failed) 
"+"storarge. removedStorages size = " + removedStorageDirs.size()), 
storarge should be storage

In file 
/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/RpcProgramNfs3.java,
 LOG.info("Partical read. Asked offset: " + offset + " count: " + count+ " and 
read back: " + readCount + " file size: "+ attrs.getSize()), 
Partical should be Partial

  was:
I am conducting research on log related bugs. I tried to make a tool to fix 
repetitive yet simple patterns of bugs that are related to logs. Typos in log 
messages are one of the reoccurring bugs. Therefore, I made a tool find typos 
in log statements. During my experiments, I managed to find the following typos 
in Hadoop HDFS:

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java,
 LOG.info((success ? "S" : "Uns") +"uccessfully sent block report 0x" 
+Long.toHexString(reportId) + "   containing " + reports.length +" storage 
report(s)  of which we sent " + numReportsSent + "." +" The reports had " + 
totalBlockCount +" total blocks and used " + numRPCs +" RPC(s). This took " + 
brCreateCost +" msec to generate and " + brSendCost +" msecs for RPC and NN 
processing." +" Got back " +((nCmds == 0) ? "no commands" :((nCmds == 1) ? "one 
command: " + cmds.get(0) :(nCmds + " commands: " + Joiner.on("; 
").join(cmds +"."), 
uccessfully  successfully

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java,
 LOG.info("Balancing bandwith is " + bandwidth + " bytes/s"), 
bandwith should be bandwidth

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeList.java,
 FsDatasetImpl.LOG.info("The volume " + v + " is closed while " +"addng 
replicas  ignored."), 
addng should be adding 

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CancelDelegationTokenServlet.java,
 LOG.info("Exception while cancelling token. Re-throwing. "  e), 
cancelling should be canceling

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java,
 NameNode.LOG.info("Caching file names occuring more than " + threshold+ " 
times"), 
occuring should be occurring

In file 
/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NNStorage.java,
 LOG.info("NNStorage.attemptRestoreRemovedStorage: check removed(failed) 
"+"storarge. removedStorages size = " + removedStorageDirs.size()), 
storarge should be storage

In file 

[jira] [Commented] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-14 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491523#comment-15491523
 ] 

Xiaobing Zhou commented on HDFS-10824:
--

Thanks for the review, [~anu]. v002 is posted.
1. The member is named storageCap to avoid edits. storageCapacities in the 
function startDataNodes is intended for starting additional DNs, so memorizing 
the capacity is changed accordingly. 
2. It's better not to remove the storageCapacities parameters, since 
startDataNodes is designed to start additional DNs in an on-going cluster with 
different capacities.
3. triggerHeartbeat is there to wait for the local DN storage to be initialized 
after the block pool has successfully connected to its NN. See also 
DataNode#runDatanodeDaemon -> blockPoolManager.startAll() -> 
BPOfferService.start -> BPServiceActor.start -> BPServiceActor.run -> 
BPServiceActor.connectToNNAndHandshake; storage initialization is triggered 
asynchronously. triggerHeartbeat is necessary in this case, although 
triggerBlockReport is not.
4. Passed different capacities (a usage sketch follows below).
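For reference, a hedged sketch of how the fixed behavior can be exercised 
(JUnit-style; this shows the intended usage that HDFS-10824 is fixing, not a 
claim about current behavior):

{code}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;

public class StorageCapacitiesSketch {
  public void verifyCapacities() throws Exception {
    final long cap = 300L * 1024;
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(new Configuration())
        .numDataNodes(1)
        .storagesPerDatanode(2)
        .storageCapacities(new long[][] {{cap, cap}})
        .build();
    try {
      cluster.waitActive();
      cluster.triggerHeartbeats(); // per point 3: wait for storage init
      DatanodeStorageReport[] reports = cluster.getFileSystem().getClient()
          .getDatanodeStorageReport(DatanodeReportType.LIVE);
      for (StorageReport sr : reports[0].getStorageReports()) {
        assertEquals(cap, sr.getCapacity()); // should reflect storageCapacities
      }
    } finally {
      cluster.shutdown();
    }
  }
}
{code}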

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, 
> HDFS-10824.002.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on real 
> capacity. It can be reproduced by explicitly setting storageCapacities and 
> then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to 
> compare results. The following is the storage report for one node with two 
> volumes after I set the capacity to 300 * 1024. Apparently, the capacity is not 
> changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|






[jira] [Comment Edited] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491507#comment-15491507
 ] 

Arpit Agarwal edited comment on HDFS-10301 at 9/14/16 9:34 PM:
---

I don't think it is safe to remove storages (and hence block replicas from 
memory) when the NameNode doesn't have up to date block replica state because 
the block->storage mapping on the NameNode can be stale e.g. due to disk 
balancer moving replicas; or due to the way VolumeChoosingPolicy picks storages 
for new blocks.


was (Author: arpitagarwal):
I don't think it is safe to remove storages (and hence blocks) when the 
NameNode doesn't have up to date block replica state because the block->storage 
mapping on the NameNode can be stale e.g. due to disk balancer moving replicas; 
or due to the way VolumeChoosingPolicy picks storages for new blocks.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report. Then 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing storages from different 
> reports. This screws up the blockReportId field, which makes the NameNode think 
> that some storages are zombie. Replicas from zombie storages are immediately 
> removed, causing missing blocks.






[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-14 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491505#comment-15491505
 ] 

Erik Krogen commented on HDFS-10843:


What advantage is there to wrapping those two specific calls in a new method 
{{BlockManager.convertToCompleteBlock}} rather than just calling both from 
within {{BlockManager.completeBlock}} as in patch v003? I think the 
distinction between the two methods would be a little confusing. 

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch, HDFS-10843.003.patch, HDFS-10843.004.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota






[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491507#comment-15491507
 ] 

Arpit Agarwal commented on HDFS-10301:
--

I don't think it is safe to remove storages (and hence blocks) when the 
NameNode doesn't have up to date block replica state because the block->storage 
mapping on the NameNode can be stale e.g. due to disk balancer moving replicas; 
or due to the way VolumeChoosingPolicy picks storages for new blocks.

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report. Then 
> it sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing storages from different 
> reports. This screws up the blockReportId field, which makes the NameNode think 
> that some storages are zombie. Replicas from zombie storages are immediately 
> removed, causing missing blocks.






[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-14 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10824:
-
Attachment: HDFS-10824.002.patch

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, 
> HDFS-10824.002.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on real 
> capacity. It can be reproduced by explicitly setting storageCapacities and 
> then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to 
> compare results. The following is the storage report for one node with two 
> volumes after I set the capacity to 300 * 1024. Apparently, the capacity is not 
> changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|






[jira] [Updated] (HDFS-10763) Open files can leak permanently due to inconsistent lease update

2016-09-14 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HDFS-10763:
---
Fix Version/s: 2.6.5

Cherry-picked it to 2.6.5 (trivial).

> Open files can leak permanently due to inconsistent lease update
> 
>
> Key: HDFS-10763
> URL: https://issues.apache.org/jira/browse/HDFS-10763
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.3, 2.6.4
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Fix For: 2.6.5, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-10763.br27.patch, 
> HDFS-10763.branch-2.7.supplement.patch, HDFS-10763.branch-2.7.v2.patch, 
> HDFS-10763.patch
>
>
> This can happen during {{commitBlockSynchronization()}} or when a client gives 
> up on closing a file after retries.
> In {{finalizeINodeFileUnderConstruction()}}, the lease is removed first and 
> then the inode is turned into the closed state. But if any block is not in 
> COMPLETE state, 
> {{INodeFile#assertAllBlocksComplete()}} will throw an exception. This causes 
> the lease to be removed from the lease manager, but not from the inode. 
> Since the lease manager does not have a lease for the file, no lease recovery 
> will happen for this file. Moreover, this broken state is persisted and 
> reconstructed through saving and loading of the fsimage. Since no replication 
> is scheduled for the blocks of the file, this can cause data loss and also 
> block the decommissioning of datanodes.
> The lease cannot be manually recovered either. It fails with
> {noformat}
> ...AlreadyBeingCreatedException): Failed to RECOVER_LEASE /xyz/xyz for user1 
> on
>  0.0.0.1 because the file is under construction but no leases found.
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2950)
> ...
> {noformat}
> When a client retries {{close()}}, the same inconsistent state is created, 
> but it can work the next time since {{checkLease()}} only looks at the 
> inode, not the lease manager, in this case. The close behavior is different if 
> HDFS-8999 is activated by setting 
> {{dfs.namenode.file.close.num-committed-allowed}} to 1 (unlikely) or 2 
> (never). 
> In principle, the under-construction feature of an inode and the lease in the 
> lease manager should never go out of sync. The fix involves two parts (part 1 
> is sketched after this description).
> 1) Prevent inconsistent lease updates. We can achieve this by calling 
> {{removeLease()}} after checking the block state. 
> 2) Avoid reconstructing inconsistent lease states from an fsimage. 1) alone 
> does not correct existing inconsistencies surviving through fsimages. This can 
> be done at fsimage loading time by making sure a corresponding lease exists 
> for each inode that has the under-construction feature. 
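A self-contained sketch of the reordering in part 1 (all types here are 
stand-ins for the real FSNamesystem/LeaseManager/INodeFile internals):

{code}
class LeaseFinalizeSketch {
  interface LeaseManager { void removeLease(String holder, String src); }
  interface INodeFile {
    void assertAllBlocksComplete(); // throws if any block is not COMPLETE
    void toCompleteFile(long mtime);
  }

  void finalizeINodeFileUnderConstruction(
      LeaseManager leaseManager, String holder, String src, INodeFile file) {
    // Check block state FIRST; if this throws, the lease is left in the
    // lease manager, so lease recovery can still run for the file.
    file.assertAllBlocksComplete();
    // Only after the check succeeds do we remove the lease and close the
    // file, keeping the lease manager and the inode in sync.
    leaseManager.removeLease(holder, src);
    file.toCompleteFile(System.currentTimeMillis());
  }
}
{code}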






[jira] [Commented] (HDFS-9333) Some tests using MiniDFSCluster errored complaining port in use

2016-09-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491495#comment-15491495
 ] 

Andrew Wang commented on HDFS-9333:
---

I think even if we leave it as {{restartDataNodes(true)}}, just using 
{{ServerSocketUtil#getPort}} instead will reduce the frequency of port 
conflicts by getting us out of the ephemeral port range.

I'm happy to +1 any patch so we can give it a try.
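Concretely, the suggestion looks something like this (the base port and retry 
count are illustrative):

{code}
import java.io.IOException;

import org.apache.hadoop.net.ServerSocketUtil;

public class PortPickSketch {
  // Sketch only: start probing at a fixed port below the ephemeral range
  // and let ServerSocketUtil find a free one, instead of re-binding the
  // port a previous MiniDFSCluster instance happened to use.
  public static int pickHttpPort() throws IOException {
    return ServerSocketUtil.getPort(10021, 10); // base port, 10 retries
  }
}
{code}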

> Some tests using MiniDFSCluster errored complaining port in use
> ---
>
> Key: HDFS-9333
> URL: https://issues.apache.org/jira/browse/HDFS-9333
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Kai Zheng
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HDFS-9333.001.patch, HDFS-9333.002.patch
>
>
> Ref. the following:
> {noformat}
> Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 30.483 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
> testRead(org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped)
>   Time elapsed: 11.021 sec  <<< ERROR!
> java.net.BindException: Port in use: localhost:49333
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at 
> org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
>   at 
> org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:884)
>   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:826)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:821)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:675)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:883)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:862)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1555)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:2015)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1996)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.doTestRead(TestBlockTokenWithDFS.java:539)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped.testRead(TestBlockTokenWithDFSStriped.java:62)
> {noformat}
> Another one:
> {noformat}
> Tests run: 5, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 9.859 sec <<< 
> FAILURE! - in org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
> testFailoverAndBackOnNNShutdown(org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController)
>   Time elapsed: 0.41 sec  <<< ERROR!
> java.net.BindException: Problem binding to [localhost:10021] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:469)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:695)
>   at org.apache.hadoop.ipc.Server.(Server.java:2464)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:399)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:742)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:680)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:883)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:862)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1555)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1245)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1014)
>   at 

[jira] [Updated] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-09-14 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10637:
--
Status: Open  (was: Patch Available)

> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch, HDFS-10637.002.patch, 
> HDFS-10637.003.patch, HDFS-10637.004.patch, HDFS-10637.005.patch, 
> HDFS-10637.006.patch, HDFS-10637.007.patch
>
>
> Modifications to {{FsVolumeSpi}} and {{FsVolumeImpl}} to remove references to 
> {{java.io.File}}.






[jira] [Commented] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-14 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491427#comment-15491427
 ] 

Andrew Wang commented on HDFS-10475:


bq. how we would publish the histograms?

We already have histogram metrics; see MutableQuantiles. Basically we snapshot 
a few interesting quantiles over a window length and return those. MQ 
unfortunately isn't very efficient, which is why I brought up HdrHistogram.
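For readers unfamiliar with it, a minimal sketch of wiring up a MutableQuantiles 
metric (the metric names and 60s window are illustrative, not from any patch 
here):

{code}
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableQuantiles;

@Metrics(about = "Lock metrics sketch", context = "dfs")
class LockMetricsSketch {
  private final MetricsRegistry registry = new MetricsRegistry("LockMetrics");
  private final MutableQuantiles lockHeldTime = registry.newQuantiles(
      "fsnWriteLockHeldTime",             // metric name
      "Time the FSN write lock is held",  // description
      "ops", "latency",                   // sample and value names
      60);                                // rolling snapshot window, seconds

  void recordLockHeld(long micros) {
    lockHeldTime.add(micros); // snapshotted as p50/p75/p90/p95/p99 etc.
  }
}
{code}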

> Adding metrics for long FSNamesystem read and write locks
> -
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow-up of the comment on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]: 
> add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on 
> the namenode, similar to what we have for slow write/network WARN/metrics on 
> the datanode.






[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-14 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10824:
-
Attachment: (was: HDFS-10824.002.patch)

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on real 
> capacity. It can be reproduced by explicitly setting storageCapacities and 
> then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to 
> compare results. The following is the storage report for one node with two 
> volumes after I set the capacity to 300 * 1024. Apparently, the capacity is not 
> changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|
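
For context, a rough sketch of the reproduction described above; it assumes the 
standard MiniDFSCluster.Builder and DFSClient APIs and is not taken from the 
attached patches:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;
import org.apache.hadoop.hdfs.server.protocol.StorageReport;

public class StorageCapacityRepro {
  public static void main(String[] args) throws Exception {
    final long cap = 300 * 1024;
    Configuration conf = new HdfsConfiguration();
    try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .storagesPerDatanode(2)
        .storageCapacities(new long[] { cap, cap })
        .build()) {
      cluster.waitActive();
      DatanodeStorageReport[] reports = cluster.getFileSystem().getClient()
          .getDatanodeStorageReport(DatanodeReportType.LIVE);
      for (StorageReport r : reports[0].getStorageReports()) {
        // Expected: 300 * 1024; observed per this report: the disk's capacity.
        System.out.println(r.getCapacity());
      }
    }
  }
}
{code}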



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10824) MiniDFSCluster#storageCapacities has no effects on real capacity

2016-09-14 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10824:
-
Attachment: HDFS-10824.002.patch

> MiniDFSCluster#storageCapacities has no effects on real capacity
> 
>
> Key: HDFS-10824
> URL: https://issues.apache.org/jira/browse/HDFS-10824
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10824.000.patch, HDFS-10824.001.patch, 
> HDFS-10824.002.patch
>
>
> It has been noticed that MiniDFSCluster#storageCapacities has no effect on the 
> real capacity. It can be reproduced by explicitly setting storageCapacities and 
> then calling ClientProtocol#getDatanodeStorageReport(DatanodeReportType.LIVE) to 
> compare results. The following is the storage report for one node with two 
> volumes after I set the capacity to 300 * 1024. Apparently, the capacity is not 
> changed.
> adminState|DatanodeInfo$AdminStates  (id=6861)
> |blockPoolUsed|215192|
> |cacheCapacity|0|
> |cacheUsed|0|
> |capacity|998164971520|
> |datanodeUuid|"839912e9-5bcb-45d1-81cf-9a9c9c02a00b" (id=6862)|
> |dependentHostNames|LinkedList  (id=6863)|
> |dfsUsed|215192|
> |hostName|"127.0.0.1" (id=6864)|
> |infoPort|64222|
> |infoSecurePort|0|
> |ipAddr|"127.0.0.1" (id=6865)|
> |ipcPort|64223|
> |lastUpdate|1472682790948|
> |lastUpdateMonotonic|209605640|
> |level|0|
> |location|"/default-rack" (id=6866)|
> |maintenanceExpireTimeInMS|0|
> |parent|null|
> |peerHostName|null|
> |remaining|20486512640|
> |softwareVersion|null|
> |upgradeDomain|null|
> |xceiverCount|1|
> |xferAddr|"127.0.0.1:64220" (id=6855)|
> |xferPort|64220|
> [0]StorageReport  (id=6856)
> |blockPoolUsed|4096|
> |capacity|499082485760|
> |dfsUsed|4096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6869)|
> [1]StorageReport  (id=6859)
> |blockPoolUsed|211096|
> |capacity|499082485760|
> |dfsUsed|211096|
> |failed|false|
> |remaining|10243256320|
> |storage|DatanodeStorage  (id=6872)|



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491383#comment-15491383
 ] 

Hadoop QA commented on HDFS-10637:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 10 new + 999 unchanged - 11 fixed = 1009 total (was 1010) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  FsVolumeImpl is incompatible with expected argument type String in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.addVolume(FsVolumeImpl)
  At FsDatasetAsyncDiskService.java:argument type String in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.addVolume(FsVolumeImpl)
  At FsDatasetAsyncDiskService.java:[line 129] |
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10637 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828509/HDFS-10637.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0ece8f984bfe 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2a8f55a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16741/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 

[jira] [Commented] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-14 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491251#comment-15491251
 ] 

Erik Krogen commented on HDFS-10475:


Thanks for the pointer to RpcDetailedActivity, [~andrew.wang]. Definitely very 
helpful! 

I also looked at HdrHistogram. The information it could provide seems very 
useful, but I wonder how we would publish the histograms. Such information 
does not seem to fit into the existing metrics publication framework. If you 
have ideas about this, let me know.

The reason we are interested in pursuing this at the lock level rather than the 
RPC level is that RPC time includes, for example, the time an operation spent 
waiting in the lock queue. If an operation has a long RPC time, it is not clear 
whether that is because it was blocked behind other long operations or because 
of slowness within the operation itself. It would be useful to be able to drill 
down to find the specific culprit operations that are spending a lot of time 
holding the lock.

[~kihwal], for your first two examples, even if the frequency is low, this 
would still show up as a spike in the metrics, right? Combined with the 
long-held-lock logging from HDFS-10817 and HDFS-9145, these cases should be 
pretty well covered. Lock-level metrics would also let us capture the latter 
examples you discussed, which cannot be captured by the current RPC-level 
metrics.

The question about {{getContentSummary}} is interesting, but this is a special 
case, right? When looking at the metrics for {{getContentSummary}}, one should 
keep in mind that the "number of ops" may be an overestimate, since each lock 
period would be counted as an op; the overall time spent locking for 
{{getContentSummary}} would still be logged accurately, which would still give 
an idea of which operations are expensive in terms of locking.
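
As a rough sketch of the lock-level measurement being discussed, assuming the 
FSN lock is wrapped so that hold time can be billed per operation (all names 
here are illustrative, not from a patch):

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TimedFSNLock {
  private static final long WARN_THRESHOLD_MS = 1000;
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(true);
  private long acquiredNanos;

  void writeLock() {
    lock.writeLock().lock();
    // Start timing only after acquisition, so queue wait time is excluded.
    acquiredNanos = System.nanoTime();
  }

  void writeUnlock(String opName) {
    // Compute the hold time before releasing the lock.
    long heldMs =
        TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - acquiredNanos);
    lock.writeLock().unlock();
    if (heldMs > WARN_THRESHOLD_MS) {
      System.err.println("FSN write lock held by " + opName
          + " for " + heldMs + " ms");
    }
    // A real version would also add heldMs to a per-operation histogram here,
    // so getContentSummary's multiple lock periods each get billed separately.
  }
}
{code}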

> Adding metrics for long FSNamesystem read and write locks
> -
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow-up to the comments on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]:
> add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on the 
> namenode, similar to the slow write/network WARN logs and metrics we have on the 
> datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-09-14 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10637:
--
Status: Patch Available  (was: Open)

> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch, HDFS-10637.002.patch, 
> HDFS-10637.003.patch, HDFS-10637.004.patch, HDFS-10637.005.patch, 
> HDFS-10637.006.patch, HDFS-10637.007.patch
>
>
> Modifications to {{FsVolumeSpi}} and {{FsVolumeImpl}} to remove references to 
> {{java.io.File}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10637) Modifications to remove the assumption that FsVolumes are backed by java.io.File.

2016-09-14 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-10637:
--
Attachment: HDFS-10637.007.patch

Updated the patch to work with the most recent patch of HDFS-10636.

> Modifications to remove the assumption that FsVolumes are backed by 
> java.io.File.
> -
>
> Key: HDFS-10637
> URL: https://issues.apache.org/jira/browse/HDFS-10637
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, fs
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10637.001.patch, HDFS-10637.002.patch, 
> HDFS-10637.003.patch, HDFS-10637.004.patch, HDFS-10637.005.patch, 
> HDFS-10637.006.patch, HDFS-10637.007.patch
>
>
> Modifications to {{FsVolumeSpi}} and {{FsVolumeImpl}} to remove references to 
> {{java.io.File}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10301) BlockReport retransmissions may lead to storages falsely being declared zombie if storage report processing happens out of order

2016-09-14 Thread Vinitha Reddy Gankidi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491142#comment-15491142
 ] 

Vinitha Reddy Gankidi commented on HDFS-10301:
--

[~arpiagariu] Storage reports are sent in heartbeats anyway, and these reports 
have the information required to prune zombie storages. These storages are only 
marked as FAILED in the heartbeat; the replicas are removed in the background by 
the HeartbeatManager. Why exactly do you think zombie removal in heartbeats is 
not safe? Why do we need to wait for all storage block reports from an FBR?

> BlockReport retransmissions may lead to storages falsely being declared 
> zombie if storage report processing happens out of order
> 
>
> Key: HDFS-10301
> URL: https://issues.apache.org/jira/browse/HDFS-10301
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.1
>Reporter: Konstantin Shvachko
>Assignee: Vinitha Reddy Gankidi
>Priority: Critical
> Attachments: HDFS-10301.002.patch, HDFS-10301.003.patch, 
> HDFS-10301.004.patch, HDFS-10301.005.patch, HDFS-10301.006.patch, 
> HDFS-10301.007.patch, HDFS-10301.008.patch, HDFS-10301.009.patch, 
> HDFS-10301.01.patch, HDFS-10301.010.patch, HDFS-10301.011.patch, 
> HDFS-10301.012.patch, HDFS-10301.013.patch, HDFS-10301.014.patch, 
> HDFS-10301.branch-2.7.patch, HDFS-10301.branch-2.patch, 
> HDFS-10301.sample.patch, zombieStorageLogs.rtf
>
>
> When the NameNode is busy, a DataNode can time out sending a block report and 
> then sends the block report again. The NameNode, while processing these two 
> reports at the same time, can interleave processing storages from different 
> reports. This corrupts the blockReportId field, which makes the NameNode think 
> that some storages are zombies. Replicas from zombie storages are immediately 
> removed, causing missing blocks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10843) Quota Feature Cached Size != Computed Size When Block Committed But Not Completed

2016-09-14 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491124#comment-15491124
 ] 

Konstantin Shvachko commented on HDFS-10843:


You are right, the control flow looks awkward once you see it laid out.
Let's leave {{BlockInfo.convertToCompleteBlock()}} as is and instead wrap the 
calls to these 2 methods in a new function, 
{{BlockManager.convertToCompleteBlock(curBlock, iip)}}, putting all the comments 
about updating space on completion there.
Would that look better?
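
A sketch of what that wrapper might look like; the space-update call below is a 
hypothetical stand-in for whatever the patch actually does at that point:

{code}
// In BlockManager: complete the block and fix up the cached quota usage in
// one place, so that callers cannot forget the space update.
private void convertToCompleteBlock(BlockInfo curBlock, INodesInPath iip)
    throws IOException {
  curBlock.convertToCompleteBlock();
  // A committed block is conservatively charged a full block of space; once
  // it completes, shrink the cached usage to the block's final size.
  // (updateSpaceForCompleteBlock is an assumed helper name.)
  namesystem.getFSDirectory().updateSpaceForCompleteBlock(curBlock, iip);
}
{code}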

> Quota Feature Cached Size != Computed Size When Block Committed But Not 
> Completed
> -
>
> Key: HDFS-10843
> URL: https://issues.apache.org/jira/browse/HDFS-10843
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Affects Versions: 2.6.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
> Attachments: HDFS-10843.000.patch, HDFS-10843.001.patch, 
> HDFS-10843.002.patch, HDFS-10843.003.patch, HDFS-10843.004.patch
>
>
> Currently when a block has been committed but has not yet been completed, the 
> cached size (used for the quota feature) of the directory containing that 
> block differs from the computed size. This results in log messages of the 
> following form:
> bq. ERROR namenode.NameNode 
> (DirectoryWithQuotaFeature.java:checkStoragespace(141)) - BUG: Inconsistent 
> storagespace for directory /TestQuotaUpdate. Cached = 512 != Computed = 8192
> When a block is initially started under construction, the used space is 
> conservatively set to a full block. When the block is committed, the cached 
> size is updated to the final size of the block. However, the calculation of 
> the computed size uses the full block size until the block is completed, so 
> in the period where the block is committed but not completed they disagree. 
> To fix this we need to decide which is correct and fix the other to match. It 
> seems to me that the cached size is correct since once the block is committed 
> its size will not change. 
> This can be reproduced using the following steps:
> - Create a directory with a quota
> - Start writing to a file within this directory
> - Prevent all datanodes to which the file is written from communicating the 
> corresponding BlockReceivedAndDeletedRequestProto to the NN temporarily (i.e. 
> simulate a transient network partition/delay)
> - During this time, call DistributedFileSystem.getContentSummary() on the 
> directory with the quota



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10810) setReplication removing block from under construction temporarily when batch IBR is enabled.

2016-09-14 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491119#comment-15491119
 ] 

Mingliang Liu commented on HDFS-10810:
--

{quote}
Brahma Reddy Battula: we can remove the v3 patch from the JIRA?
{quote}
I posted the v3 patch to help me understand the root cause of the problem. 
Sorry for the confusion; I have deleted it now. Let's focus on the original v2 patch.

{quote}
Tsz Wo Nicholas Sze: Why does v3 not work?
{quote}
One change I made in the deleted v3 patch was the number of DNs vs. the 
replication factor: in the v2 test there are 3 DNs, while the replication factor 
is raised from 3 to 10. I updated the v3 patch for this, and it then behaves as 
v2 expects: the test fails without the patch and passes with it. Just for the 
record, I pasted it below instead of uploading a new patch.
{code}
  @Test(timeout = 60000)
  public void testSetReplicationWhenBatchIBR() throws Exception {
    final Configuration conf = new HdfsConfiguration();
    conf.setLong(DFSConfigKeys.DFS_BLOCKREPORT_INCREMENTAL_INTERVAL_MSEC_KEY,
        30000);
    conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 1024);
    conf.setInt(DFSConfigKeys.DFS_NAMENODE_FILE_CLOSE_NUM_COMMITTED_ALLOWED_KEY,
        1);
    try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(3).build()) {
      cluster.waitActive();
      final DistributedFileSystem dfs = cluster.getFileSystem();
      DFSTestUtil.createFile(dfs, new Path("/testSetReplicationWhenBatchIBR-1"),
          1024L, (short) 3, 0L);
      // send an FBR to delay the next IBR
      cluster.triggerBlockReports();
      Thread.sleep(3000);
      final Path filePath = new Path("/testSetReplicationWhenBatchIBR-2");
      DFSTestUtil.createFile(dfs, filePath, 1024L, (short) 3, 0L);
      dfs.setReplication(filePath, (short) 10);
      Thread.sleep(3000);
      assertEquals(0,
          cluster.getNamesystem().getBlockManager().getMissingBlocksCount());
      assertEquals(1, cluster.getNamesystem().getBlockManager()
          .getUnderReplicatedBlocksCount());
    }
  }
{code}

{quote}
Tsz Wo Nicholas Sze: I believe you have found a real bug. I just want to 
understand the details more.
{quote}
Me too. Again, the basic idea of taking pending replicas into account when 
updating the {{neededReconstructions}} makes sense to me.

>  setReplication removing block from under construction temporarily when batch 
> IBR is enabled.
> 
>
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810.patch
>
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1.
> 2) One block is written and the file is closed without waiting for the IBR.
> 3) setReplication is called immediately on the file.
> So until the finalized IBR is received, this block will be marked as corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10810) setReplication removing block from under construction temporarily when batch IBR is enabled.

2016-09-14 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10810:
-
Attachment: (was: HDFS-10810-003.patch)

>  setReplication removing block from under construction temporarily when batch 
> IBR is enabled.
> 
>
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810.patch
>
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1.
> 2) One block is written and the file is closed without waiting for the IBR.
> 3) setReplication is called immediately on the file.
> So until the finalized IBR is received, this block will be marked as corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10797) Disk usage summary of snapshots causes renamed blocks to get counted twice

2016-09-14 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-10797:
-
Attachment: HDFS-10797.001.patch

Attaching a patch that tries to identify files whose underlying inodes appear in 
both the DELETED and CREATED portions of the snapshot's diff, and does not count 
them toward the snapshot's space the way it would a file that was simply deleted. 
Also added a test case that runs through scenarios such as a chain of multiple 
renames, renaming a file and replacing the original file, and appends (even 
though the appends turned out not to have anything to do with the actual bug).
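
In spirit, the check is a set intersection over the diff lists; a simplified, 
self-contained sketch (not the patch's actual code), with generic types standing 
in for the INode classes:

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.ToLongFunction;

class SnapshotDiffUsage {
  /**
   * Sums usage only for truly deleted entries: anything present in both the
   * DELETED and CREATED lists is a rename, and counting it here would
   * double-count blocks that the live file already accounts for.
   */
  static <T> long deletedOnlyUsage(List<T> deleted, List<T> created,
      ToLongFunction<T> usageOf) {
    Set<T> createdSet = new HashSet<>(created);
    long total = 0;
    for (T d : deleted) {
      if (!createdSet.contains(d)) {
        total += usageOf.applyAsLong(d);
      }
    }
    return total;
  }
}
{code}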

> Disk usage summary of snapshots causes renamed blocks to get counted twice
> --
>
> Key: HDFS-10797
> URL: https://issues.apache.org/jira/browse/HDFS-10797
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Sean Mackrory
> Attachments: HDFS-10797.001.patch
>
>
> DirectoryWithSnapshotFeature.computeContentSummary4Snapshot calculates how 
> much disk usage is used by a snapshot by tallying up the files in the 
> snapshot that have since been deleted (that way it won't overlap with regular 
> files whose disk usage is computed separately). However, that is determined 
> from a diff that shows moved (to Trash or otherwise) or renamed files as a 
> deletion plus a creation operation that may overlap with the list of blocks. 
> Only the deletion operation is taken into consideration, and this causes 
> those blocks to be represented twice in the disk usage tally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10810) setReplication removing block from under construction temporarily when batch IBR is enabled.

2016-09-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490974#comment-15490974
 ] 

Tsz Wo Nicholas Sze commented on HDFS-10810:


[~brahmareddy], I believe you have found a real bug. I just want to understand 
the details more.

>  setReplication removing block from under construction temporarily when batch 
> IBR is enabled.
> 
>
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810-003.patch, 
> HDFS-10810.patch
>
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1.
> 2) One block is written and the file is closed without waiting for the IBR.
> 3) setReplication is called immediately on the file.
> So until the finalized IBR is received, this block will be marked as corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10810) setReplication removing block from under construction temporarily when batch IBR is enabled.

2016-09-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490968#comment-15490968
 ] 

Tsz Wo Nicholas Sze commented on HDFS-10810:


The v2 and v3 patches are the same except for testSetReplicationWhenBatchIBR. 
Why does v3 not work?

>  setReplication removing block from under construction temporarily when batch 
> IBR is enabled.
> 
>
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810-003.patch, 
> HDFS-10810.patch
>
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1.
> 2) One block is written and the file is closed without waiting for the IBR.
> 3) setReplication is called immediately on the file.
> So until the finalized IBR is received, this block will be marked as corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10805) Reduce runtime for append test

2016-09-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490923#comment-15490923
 ] 

Hudson commented on HDFS-10805:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10442 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10442/])
HDFS-10805. Reduce runtime for append test. Contributed by Gergely (arp: rev 
2a8f55a0cf147b567d70c0c07f84cf27fd413c19)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/AppendTestUtil.java


> Reduce runtime for append test
> --
>
> Key: HDFS-10805
> URL: https://issues.apache.org/jira/browse/HDFS-10805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Gergely Novák
>Assignee: Gergely Novák
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10805.001.patch
>
>
> {{testAppend}} takes by far the most time in the test suite 
> {{TestHDFSFileSystemContract}}, more than 1 min 45 sec (while all the other 
> tests run in under 3 seconds). In this test we perform 500 appends, which takes 
> a lot of time. I suggest reducing the number of appends, as it won't change 
> the test's strength, only its runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10805) Reduce runtime for append test

2016-09-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10805:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed through 2.8.0. Thanks for the contribution [~GergelyNovak]!

> Reduce runtime for append test
> --
>
> Key: HDFS-10805
> URL: https://issues.apache.org/jira/browse/HDFS-10805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Gergely Novák
>Assignee: Gergely Novák
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-10805.001.patch
>
>
> {{testAppend}} takes by far the most time in the test suite 
> {{TestHDFSFileSystemContract}}, more than 1 min 45 sec (while all the other 
> tests run in under 3 seconds). In this test we perform 500 appends, which takes 
> a lot of time. I suggest reducing the number of appends, as it won't change 
> the test's strength, only its runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10805) Reduce runtime for append test

2016-09-14 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490869#comment-15490869
 ] 

Arpit Agarwal commented on HDFS-10805:
--

+1 for the patch. The unit test failures are unrelated.

> Reduce runtime for append test
> --
>
> Key: HDFS-10805
> URL: https://issues.apache.org/jira/browse/HDFS-10805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Gergely Novák
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: HDFS-10805.001.patch
>
>
> {{testAppend}} takes by far the most time in the test suite 
> {{TestHDFSFileSystemContract}}, more than 1 min 45 sec (while all the other 
> tests run in under 3 seconds). In this test we perform 500 appends, which takes 
> a lot of time. I suggest reducing the number of appends, as it won't change 
> the test's strength, only its runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10805) Reduce runtime for append test

2016-09-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10805:
-
Assignee: Gergely Novák

> Reduce runtime for append test
> --
>
> Key: HDFS-10805
> URL: https://issues.apache.org/jira/browse/HDFS-10805
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Gergely Novák
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: HDFS-10805.001.patch
>
>
> {{testAppend}} takes by far the most time in the test suite 
> {{TestHDFSFileSystemContract}}, more than 1 min 45 sec (while all the other 
> tests run in under 3 seconds). In this test we perform 500 appends, which takes 
> a lot of time. I suggest reducing the number of appends, as it won't change 
> the test's strength, only its runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10810) setReplication removing block from under construction temporarily when batch IBR is enabled.

2016-09-14 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490765#comment-15490765
 ] 

Brahma Reddy Battula commented on HDFS-10810:
-

Can you please look at the v2 patch? v3 was uploaded by [~liuml07], who commented 
[here|https://issues.apache.org/jira/browse/HDFS-10810?focusedCommentId=15489196=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15489196]. 
[~liuml07], I think we can remove the v3 patch from the JIRA?

>  setReplication removing block from under construction temporarily when batch 
> IBR is enabled.
> 
>
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810-003.patch, 
> HDFS-10810.patch
>
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1.
> 2) One block is written and the file is closed without waiting for the IBR.
> 3) setReplication is called immediately on the file.
> So until the finalized IBR is received, this block will be marked as corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10690) Optimize insertion/removal of replica in ShortCircuitCache.java

2016-09-14 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490754#comment-15490754
 ] 

Xiaoyu Yao commented on HDFS-10690:
---

Thanks [~fenghua_hu] for providing the new numbers. Considering the perf 
difference and what we have done before for similar TreeMap performance issues 
in HDFS-7433 and HDFS-8793, I would prefer a LinkedMap (or HashMap) based 
solution with a lower risk of regression.
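
For reference, a minimal sketch of the map-based direction: an insertion-ordered 
map gives O(1) add/remove while still yielding the oldest entry first, which is 
all the evictor needs (names here are illustrative):

{code}
import java.util.LinkedHashMap;
import java.util.Map;

class EvictionQueue<K, V> {
  // Insertion-ordered map: O(1) insert/remove versus TreeMap's O(log n)
  // rebalancing; iteration starts at the oldest entry.
  private final Map<K, V> entries = new LinkedHashMap<>();

  void add(K key, V value) { entries.put(key, value); }
  V remove(K key) { return entries.remove(key); }

  Map.Entry<K, V> oldest() {
    return entries.isEmpty() ? null : entries.entrySet().iterator().next();
  }
}
{code}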

> Optimize insertion/removal of replica in ShortCircuitCache.java
> ---
>
> Key: HDFS-10690
> URL: https://issues.apache.org/jira/browse/HDFS-10690
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha2
>Reporter: Fenghua Hu
>Assignee: Fenghua Hu
> Attachments: HDFS-10690.001.patch, HDFS-10690.002.patch, 
> ShortCircuitCache_LinkedMap.patch
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Currently in ShortCircuitCache, two TreeMap objects are used to track the 
> cached replicas:
> private final TreeMap<Long, ShortCircuitReplica> evictable = new TreeMap<>();
> private final TreeMap<Long, ShortCircuitReplica> evictableMmapped = new TreeMap<>();
> TreeMap employs a red-black tree for sorting. This isn't an issue when using 
> traditional HDDs, but when using high-performance SSD/PCIe flash, the cost of 
> inserting/removing an entry becomes considerable.
> To mitigate this, we designed a new list-based structure for replica tracking.
> The list is a doubly-linked FIFO. The FIFO is time-based, so insertion is a 
> very low-cost operation. On the other hand, a list is not lookup-friendly. To 
> address this, we introduce two references into the ShortCircuitReplica object:
> ShortCircuitReplica next = null;
> ShortCircuitReplica prev = null;
> In this way, no lookup is needed when removing a replica from the list; we 
> only need to modify its predecessor's and successor's references.
> Our tests showed a 15-50% performance improvement when using PCIe flash 
> as the storage medium.
> The original patch is against 2.6.4; I am now porting it to Hadoop trunk, and 
> the patch will be posted soon.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10810) setReplication removing block from under construction temporarily when batch IBR is enabled.

2016-09-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490646#comment-15490646
 ] 

Tsz Wo Nicholas Sze commented on HDFS-10810:


Thanks for the explanation.  However, the new test still failed the same way 
even with the fix.
{code}
Running org.apache.hadoop.hdfs.TestFileCorruption
Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.143 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestFileCorruption
testSetReplicationWhenBatchIBR(org.apache.hadoop.hdfs.TestFileCorruption)  Time 
elapsed: 8.049 sec  <<< FAILURE!
java.lang.AssertionError: expected:<1> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.TestFileCorruption.testSetReplicationWhenBatchIBR(TestFileCorruption.java:257)
{code}
I applied HDFS-10810-003.patch without any modification and ran the test a few 
times.  It failed consistently.

>  setReplication removing block from under construction temporarily when batch 
> IBR is enabled.
> 
>
> Key: HDFS-10810
> URL: https://issues.apache.org/jira/browse/HDFS-10810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-10810-002.patch, HDFS-10810-003.patch, 
> HDFS-10810.patch
>
>
> 1) Batch IBR is enabled with the number of committed blocks allowed = 1.
> 2) One block is written and the file is closed without waiting for the IBR.
> 3) setReplication is called immediately on the file.
> So until the finalized IBR is received, this block will be marked as corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490641#comment-15490641
 ] 

Hadoop QA commented on HDFS-9668:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 34s{color} | {color:orange} root: The patch generated 65 new + 183 unchanged 
- 12 fixed = 248 total (was 195) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 77m 
56s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9668 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828452/HDFS-9668-9.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1e0ce2729626 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ea0c2b8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16740/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16740/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16740/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-10475) Adding metrics for long FSNamesystem read and write locks

2016-09-14 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490522#comment-15490522
 ] 

Kihwal Lee commented on HDFS-10475:
---

While I think this is a very useful feature, I am afraid it might not capture 
some of the major culprits. Lately we are seeing long FSN locks on 2.7. 
Examples are:
- Adding/removing nodes with existing blocks. This might show up as 
{{registerDatanode}}.
- Recommissioning nodes. Might show up as {{refreshNodes}}. If you have many 
live nodes getting recommissioned, it can take minutes depending on how many 
blocks they have.
- Decommission manager scans. It can be tuned better thanks to [~andrew.wang]'s 
rewrite, but still needs a bit of improvement. (non-rpc)
- Replication monitor can also hold the FSN lock for a very long time (e.g. 2-4 
seconds) in some cases. (non-rpc)
- These are fixed/alleviated: postponed-misreplicated block scan (non-rpc), 
replication queue initialization (non-rpc).

Note that the first two are done as RPCs, but their frequency is very low. 
Depending on how we keep track of and present the numbers, they may get washed 
out in the long run, making them seem less impactful. When these are not 
happening, the NN throughput and response times are pretty good.

How will ops like {{getContentSummary()}} be billed? It can yield the lock 
multiple times while it's running.

> Adding metrics for long FSNamesystem read and write locks
> -
>
> Key: HDFS-10475
> URL: https://issues.apache.org/jira/browse/HDFS-10475
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Xiaoyu Yao
>Assignee: Erik Krogen
>
> This is a follow-up to the comments on HADOOP-12916 and 
> [here|https://issues.apache.org/jira/browse/HDFS-9924?focusedCommentId=15310837=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15310837]:
> add more metrics and WARN/DEBUG logs for long FSD/FSN locking operations on the 
> namenode, similar to the slow write/network WARN logs and metrics we have on the 
> datanode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-09-14 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HDFS-9668:
---
Attachment: HDFS-9668-9.patch

Uploaded a new patch (v9) to fix the whitespace warning.

> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-2.patch, HDFS-9668-3.patch, 
> HDFS-9668-4.patch, HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, 
> HDFS-9668-8.patch, HDFS-9668-9.patch, execution_time.png
>
>
> During the HBase test on a tiered storage of HDFS (WAL is stored in 
> SSD/RAMDISK, and all other files are stored in HDD), we observe many 
> long-time BLOCKED threads on FsDatasetImpl in DataNode. The following is part 
> of the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
> {noformat}
> We measured the execution of some operations in FsDatasetImpl during the 
> test. The results follow.
> !execution_time.png!
> The finalizeBlock, addBlock and createRbw operations on HDD under heavy 
> load take a really long time.
> This means one slow finalizeBlock, addBlock or createRbw operation on a 
> slow storage can block all other such operations in the same DataNode, 
> especially in HBase when many wal/flusher/compactor threads are configured.
> We need a finer-grained lock mechanism in a new FsDatasetImpl implementation, 
> and users can choose the implementation by configuring 
> "dfs.datanode.fsdataset.factory" in the DataNode.
> We can implement the lock at either the storage level or the block level.
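
A minimal sketch of the storage-level variant (one lock per volume instead of 
synchronizing on the whole dataset); the class and method names are illustrative:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

class PerVolumeLocks {
  private final ConcurrentHashMap<String, ReentrantLock> locks =
      new ConcurrentHashMap<>();

  // Serializes operations per storage/volume, so a slow createRbw on one
  // HDD no longer blocks writers on the other volumes.
  AutoCloseable acquire(String storageId) {
    ReentrantLock lock =
        locks.computeIfAbsent(storageId, id -> new ReentrantLock());
    lock.lock();
    return lock::unlock;
  }
}
{code}

Usage would be try-with-resources around each volume-scoped operation, e.g. 
{{try (AutoCloseable l = perVolumeLocks.acquire(storageId)) { ... }}}.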



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490287#comment-15490287
 ] 

Hadoop QA commented on HDFS-9668:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 30s{color} | {color:orange} root: The patch generated 65 new + 183 unchanged 
- 12 fixed = 248 total (was 195) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
44s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-9668 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828426/HDFS-9668-8.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ce5ec0ac3a46 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ea0c2b8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16739/artifact/patchprocess/diff-checkstyle-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16739/artifact/patchprocess/whitespace-eol.txt
 |
| unit | 

[jira] [Created] (HDFS-10863) hadoop superusergroup supergroup issue

2016-09-14 Thread www.jbigdata.fr (JIRA)
www.jbigdata.fr created HDFS-10863:
--

 Summary: hadoop superusergroup supergroup issue
 Key: HDFS-10863
 URL: https://issues.apache.org/jira/browse/HDFS-10863
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.2
 Environment: $ hadoop version
Hadoop 2.7.2
Reporter: www.jbigdata.fr
Priority: Minor


I want to map my Unix user and group to HDFS: hduser:hadoop.

For the user I use the environment variable:

$ echo $HADOOP_HDFS_USER
hduser

For the group I use hdfs-site.xml:

<property>
  <name>dfs.permissions.superusergroup</name>
  <value>hadoop</value>
</property>

The namenode log file shows the configured user/group values.

INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = 
hduser (auth:SIMPLE)
INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup  = 
hadoop
INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = 
true

Everything seems to be OK, but when I copy a file from the local FS to HDFS, 
the group is not correct. It keeps the default supergroup value.

These shell commands show the issue:

$ ll /srv/downloads/zk.tar
-rw-r--r-- 1 hduser hadoop 41984000 Aug 18 13:25 /srv/downloads/zk.tar
$ hdfs dfs -put /srv/downloads/zk.tar /tmp
$ hdfs dfs -ls /tmp/zk.tar
-rw-r--r--   2 hduser supergroup   41984000 2016-09-14 12:47 /tmp/zk.tar

I have:

-rw-r--r-- 2 hduser supergroup 41984000 2016-09-14 12:47 /tmp/zk.tar

I expect:

-rw-r--r-- 2 hduser hadoop 41984000 2016-09-14 12:47 /tmp/zk.tar

Why is the HDFS group not the value of the dfs.permissions.superusergroup 
property?

@jbigdata.fr
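
A likely explanation (an assumption here, not confirmed in this issue): 
{{dfs.permissions.superusergroup}} only names the group whose members are 
treated as HDFS superusers; it does not set the group of new files. HDFS 
follows the BSD rule, so a new file inherits the group of its parent 
directory. Changing the parent directory's group should give the expected 
result:

{noformat}
$ hdfs dfs -ls -d /tmp            # parent directory group is presumably 'supergroup'
$ hdfs dfs -chgrp hadoop /tmp     # change the parent directory's group
$ hdfs dfs -put -f /srv/downloads/zk.tar /tmp
$ hdfs dfs -ls /tmp/zk.tar        # the new copy now inherits group 'hadoop'
{noformat}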



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10777) DataNode should report volume failures if DU cannot access files

2016-09-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490031#comment-15490031
 ] 

Akira Ajisaka commented on HDFS-10777:
--

Therefore, just logging or incrementing a metric is fine.

> DataNode should report volume failures if DU cannot access files
> ---
>
> Key: HDFS-10777
> URL: https://issues.apache.org/jira/browse/HDFS-10777
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10777.01.patch
>
>
> HADOOP-12973 refactored DU and made it pluggable. The refactoring has a 
> side effect: if DU encounters an exception, the exception is caught, 
> logged, and ignored, essentially fixing HDFS-9908 (in which runaway 
> exceptions prevented DataNodes from handshaking with NameNodes).
> However, this "fix" is not good: if the disk is bad, the DataNode takes no 
> immediate action other than logging the exception. The existing 
> {{FsDatasetSpi#checkDataDir}} has been reduced to blindly checking only a 
> small number of directories. When a disk goes bad, often only a few files 
> are bad initially, so checking a small number of directories makes it easy 
> to overlook the degraded disk.
> I propose: in addition to logging the exception, the DataNode should 
> proactively verify that the files are not accessible, remove the volume, 
> and make the failure visible in JMX, so that administrators can spot the 
> failure via monitoring systems.
> A different fix, based on HDFS-9908, is needed before Hadoop 2.8.0.
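
To make the proposal concrete, a minimal, self-contained sketch of the check 
(my illustration only; the class and method names are hypothetical, not 
Hadoop's actual internals):

{noformat}
import java.io.File;
import java.io.IOException;

// Hypothetical sketch: on a DU failure, probe the directory before ignoring
// the error, and surface a genuine volume failure instead of only logging it.
public class DuFailureCheck {

  /** Heuristic: does the failure look like a real volume failure? */
  public static boolean isVolumeFailure(File dir) {
    // A vanished child causes a transient du failure (HDFS-8858); a directory
    // that still exists but cannot be read or listed suggests a bad disk.
    return dir.exists() && (!dir.canRead() || dir.listFiles() == null);
  }

  public static void onDuFailure(File dir, IOException cause) {
    if (isVolumeFailure(dir)) {
      // Per the proposal: remove the volume and report it via JMX so that
      // monitoring systems can spot the failure (stubbed here as a message).
      System.err.println("Volume failure suspected on " + dir + ": " + cause);
    } else {
      System.err.println("Transient du failure on " + dir + ", ignoring: " + cause);
    }
  }
}
{noformat}

The point of the sketch is only to distinguish the transient race of 
HDFS-8858 from a genuinely unreadable directory before deciding to fail the 
volume.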



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10777) DataNode should report volume failures if DU cannot access files

2016-09-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490027#comment-15490027
 ] 

Akira Ajisaka commented on HDFS-10777:
--

bq. Disks can come back. That's especially if the disk flipped in some offline 
state after its controller was being hit too hard by IO requests; that's not 
unusual in Linux under heavy load (at least in the past)...ops can remount the 
disk and all will recover again.
Yes. In addition, even when a disk is healthy, DU can occasionally fail due 
to a race condition when a file in the target directory is removed or 
deleted (HDFS-8858).

> DataNode should report volume failures if DU cannot access files
> ---
>
> Key: HDFS-10777
> URL: https://issues.apache.org/jira/browse/HDFS-10777
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10777.01.patch
>
>
> HADOOP-12973 refactored DU and made it pluggable. The refactoring has a 
> side effect: if DU encounters an exception, the exception is caught, 
> logged, and ignored, essentially fixing HDFS-9908 (in which runaway 
> exceptions prevented DataNodes from handshaking with NameNodes).
> However, this "fix" is not good: if the disk is bad, the DataNode takes no 
> immediate action other than logging the exception. The existing 
> {{FsDatasetSpi#checkDataDir}} has been reduced to blindly checking only a 
> small number of directories. When a disk goes bad, often only a few files 
> are bad initially, so checking a small number of directories makes it easy 
> to overlook the degraded disk.
> I propose: in addition to logging the exception, the DataNode should 
> proactively verify that the files are not accessible, remove the volume, 
> and make the failure visible in JMX, so that administrators can spot the 
> failure via monitoring systems.
> A different fix, based on HDFS-9908, is needed before Hadoop 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9668) Optimize the locking in FsDatasetImpl

2016-09-14 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HDFS-9668:
---
Attachment: HDFS-9668-8.patch

Rebased the code on the updates in HDFS-10636 and uploaded a new patch, V8, 
which also fixes the issues in the unit tests.

> Optimize the locking in FsDatasetImpl
> -
>
> Key: HDFS-9668
> URL: https://issues.apache.org/jira/browse/HDFS-9668
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HDFS-9668-1.patch, HDFS-9668-2.patch, HDFS-9668-3.patch, 
> HDFS-9668-4.patch, HDFS-9668-5.patch, HDFS-9668-6.patch, HDFS-9668-7.patch, 
> HDFS-9668-8.patch, execution_time.png
>
>
> During an HBase test on tiered HDFS storage (the WAL is stored on 
> SSD/RAMDISK and all other files on HDD), we observed many threads BLOCKED 
> for a long time on FsDatasetImpl in the DataNode. The following is part of 
> the jstack result:
> {noformat}
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48521 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779272_40852]" - Thread 
> t@93336
>java.lang.Thread.State: BLOCKED
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:)
>   - waiting to lock <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl) owned by 
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" t@93335
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
>   
> "DataXceiver for client DFSClient_NONMAPREDUCE_-1626037897_1 at 
> /192.168.50.16:48520 [Receiving block 
> BP-1042877462-192.168.50.13-1446173170517:blk_1073779271_40851]" - Thread 
> t@93335
>java.lang.Thread.State: RUNNABLE
>   at java.io.UnixFileSystem.createFileExclusively(Native Method)
>   at java.io.File.createNewFile(File.java:1012)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createTmpFile(DatanodeUtil.java:66)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createRbwFile(BlockPoolSlice.java:271)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createRbwFile(FsVolumeImpl.java:286)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:1140)
>   - locked <18324c9> (a 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
>   at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createRbw(FsDatasetImpl.java:113)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:183)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:615)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - None
> {noformat}
> We measured the execution time of some operations in FsDatasetImpl during 
> the test; the result is shown in !execution_time.png!.
> The finalizeBlock, addBlock, and createRbw operations on HDD take a very 
> long time under heavy load.
> This means one slow finalizeBlock, addBlock, or createRbw operation on a 
> slow storage can block all other such operations in the same DataNode, 
> especially in HBase when many WAL/flusher/compactor threads are configured.
> We need a finer-grained lock mechanism in a new FsDatasetImpl 
> implementation, and users can choose the implementation by configuring 
> "dfs.datanode.fsdataset.factory" in the DataNode.
> The lock can be implemented at either the storage level or the block level; 
> a sketch of the storage-level variant follows below.
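
A minimal sketch of the storage-level variant mentioned above (my 
illustration only, assuming one lock per storage ID; see the attached 
patches for the real implementation):

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of storage-level locking: one lock per storage ID, so a slow
// createRbw/finalizeBlock on HDD does not block the same operation on
// SSD/RAMDISK in the same DataNode.
public class PerStorageLocks {
  private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

  private ReentrantLock lockFor(String storageId) {
    return locks.computeIfAbsent(storageId, id -> new ReentrantLock());
  }

  public void runLocked(String storageId, Runnable op) {
    ReentrantLock lock = lockFor(storageId);
    lock.lock();
    try {
      op.run();  // e.g. the critical section of createRbw for this storage
    } finally {
      lock.unlock();
    }
  }
}
{noformat}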



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-

[jira] [Commented] (HDFS-8858) DU should be re-executed if the target directory exists

2016-09-14 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489879#comment-15489879
 ] 

Akira Ajisaka commented on HDFS-8858:
-

This was unintentionally fixed by HADOOP-12973, but if HDFS-10777 is fixed, 
it will occur again.

> DU should be re-executed if the target directory exists
> ---
>
> Key: HDFS-8858
> URL: https://issues.apache.org/jira/browse/HDFS-8858
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Akira Ajisaka
>Priority: Minor
>
> The Unix du command can, in rare cases, fail when a child file/directory of 
> the target path is being moved or deleted. I'm thinking we should re-run du 
> if the target path still exists, to avoid failures when writing replicas. A 
> sketch of this retry follows below.
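
A minimal sketch of that retry (my illustration only; {{runDu}} is a 
hypothetical stand-in for shelling out to du, not Hadoop's API):

{noformat}
import java.io.File;
import java.io.IOException;

// Sketch: re-run du when it fails but the target directory still exists,
// treating the failure as a transient race with a moved/deleted child.
public class DuRetry {

  public static long duWithRetry(File dir, int maxRetries) throws IOException {
    IOException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return runDu(dir);
      } catch (IOException e) {
        if (!dir.exists()) {
          throw e;   // the target itself is gone: a real failure, do not retry
        }
        last = e;    // likely a child moved/deleted mid-scan: retry
      }
    }
    throw last;
  }

  // Placeholder for invoking 'du -sk <dir>' and parsing the output.
  private static long runDu(File dir) throws IOException {
    throw new IOException("not implemented in this sketch");
  }
}
{noformat}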



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10861) Refactor StatefulStripeReader and PositionStripeReader, use ECChunk version decode API

2016-09-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15489819#comment-15489819
 ] 

Hadoop QA commented on HDFS-10861:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 30s{color} | {color:orange} root: The patch generated 2 new + 64 unchanged - 
4 fixed = 66 total (was 68) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
22s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-10861 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12828409/HDFS-10861-v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e94483e07cfb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ea0c2b8 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16738/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/16738/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client U: . |
| Console output | 

[jira] [Updated] (HDFS-10861) Refactor StatefulStripeReader and PositionStripeReader, use ECChunk version decode API

2016-09-14 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-10861:
-
Attachment: HDFS-10861-v2.patch

Fixed 3 of the checkstyle issues; the other 2 do not need fixing.

> Refactor StatefulStripeReader and PositionStripeReader, use ECChunk version 
> decode API
> --
>
> Key: HDFS-10861
> URL: https://issues.apache.org/jira/browse/HDFS-10861
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: SammiChen
>Assignee: SammiChen
> Attachments: HDFS-10861-v1.patch, HDFS-10861-v2.patch
>
>
> Refactor StatefulStripeReader and PositionStripeReader to use the ECChunk 
> version of the decode API. After this refactoring, the code is very close 
> to the ideal state desired for the next step: employing the ErasureCoder 
> API instead of the RawErasureCoder API.
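
For readers following the refactoring, a rough sketch of its shape (the 
types below are simplified stand-ins I define for illustration; see the 
attached patches for the real ECChunk and decoder APIs):

{noformat}
import java.nio.ByteBuffer;

// Sketch of moving the stripe readers from raw-buffer decode calls to a
// chunk-based decode API; Chunk and Decoder are illustrative stand-ins.
public class ChunkDecodeSketch {

  static class Chunk {
    final ByteBuffer buffer;
    Chunk(ByteBuffer b) { this.buffer = b; }
  }

  interface Decoder {
    // Chunk-based decode: inputs may hold nulls at erased positions.
    void decode(Chunk[] inputs, int[] erasedIndexes, Chunk[] outputs);
  }

  // Before: each stripe reader unwrapped chunks into ByteBuffer[] itself.
  // After: readers hand Chunk[] straight to the decoder, keeping buffer
  // bookkeeping (positions, limits, erased slots) in one place.
  static void decodeStripe(Decoder decoder, Chunk[] stripe, int[] erased,
                           Chunk[] recovered) {
    decoder.decode(stripe, erased, recovered);
  }
}
{noformat}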



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org