[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2017-12-13 Thread Jack Bearden (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290484#comment-16290484
 ] 

Jack Bearden commented on HDFS-10673:
-

Since the NullPointerException is being thrown from a rather benign function, 
it seems safe to just guard against the null element at position 0. I could 
very well be mistaken, however. The following code change appears to remedy the 
edge case on my dev cluster:

{code}
// FSPermissionChecker#getINodeAttrs
// Position 0 of pathByNameArr can be null here (the source of the NPE),
// so substitute the empty string rather than dereferencing it.
if (i == 0 && pathByNameArr[i] == null) {
  elements[i] = "";
} else {
  elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
}
{code}
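
For context, here is roughly how that guard sits inside the 
component-conversion loop. The surrounding structure and the {{pathIdx}} bound 
below are my assumptions, not the committed code:

{code}
// Hedged sketch of the enclosing loop in FSPermissionChecker#getINodeAttrs.
String[] elements = new String[pathIdx + 1];
for (int i = 0; i < elements.length; i++) {
  if (i == 0 && pathByNameArr[i] == null) {
    elements[i] = "";  // null first component; avoid the NPE
  } else {
    elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
  }
}
{code}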


> Optimize FSPermissionChecker's internal path usage
> --
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10673-branch-2.7.00.patch, HDFS-10673.1.patch, 
> HDFS-10673.2.patch, HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade 
> performance and generate excessive garbage even when neither is used.  Main 
> issues:
> # A byte[][] of components is unnecessarily created.  Each path component 
> lookup converts a subrange of the byte[][] to a new String[], which is then 
> not used by the default attribute provider.
> # Subaccess checks are insanely expensive.  The full path of every subdir is 
> created by walking up the inode tree, creating an INode[], building a string 
> by converting each inode's byte[] name to a string, etc., all of which will 
> only be used if there's an exception.
> The expense of #1 should only be incurred when using the provider/enforcer 
> feature.  For #2, paths should be created on-demand for exceptions.
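
For #2, a minimal sketch of what on-demand path construction could look like; 
{{constructPath}} and its arguments are illustrative names, not the patch's 
code, and the method would only be invoked when an access check actually fails:

{code}
// Hedged sketch: build a subdir's full path only for the exception message,
// not for every directory visited during a subaccess check.
private static String constructPath(INode[] inodes, int end) {
  StringBuilder path = new StringBuilder();
  for (int i = 1; i <= end; i++) {  // skip the root, whose name is empty
    path.append('/').append(inodes[i].getLocalName());
  }
  return path.length() == 0 ? "/" : path.toString();
}
{code}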






[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290409#comment-16290409
 ] 

Allen Wittenauer commented on HDFS-12916:
-

bq. Changed hadoop-layout.sh (mapred_home, hadoop_hdfs_home, etc.) to point 
only to the hdfs client jar location, and copied all tools and shaded client 
jars.

That just broke all of the admin commands and a good chunk of hadoop-tools.

bq. Actually, this is done as part of work to use shaded client jars for 
running hdfs commands, to avoid exposing other runtime jars to clients, and to 
use the same jars for running hdfs commands.

The vast majority of hadoop shell commands are *not* client-level commands and 
actually need all of those jars.  This means that classpath construction has to 
take place on a per-command basis if one truly wants to hide all of the extra 
jars.  That, in turn, means a ton of extra code in the shell scripts.

I brought up the idea of having the default classpath with shaded jars back in 
April (https://s.apache.org/LTzv). Given how much Hortonworks, Yahoo!, and 
Cloudera have been fighting against backwards incompatibilities breaking 
rolling upgrade despite this being a major release, it likely would have been a 
wasted effort anyway.


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]






[jira] [Updated] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12895:
-
Attachment: HDFS-12895.006.patch

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch, HDFS-12895.004.patch, HDFS-12895.005.patch, 
> HDFS-12895.006.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has an owner, a group name, and a permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove, or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds that the given mount table 
> already exists.
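
For illustration, a hedged sketch of such an access check against an entry's 
{{FsPermission}}; the {{entry}}, {{user}}, and {{groups}} accessors are 
assumptions, and only the {{FsPermission}}/{{FsAction}} API is real Hadoop 
code:

{code}
// Hedged sketch: may the caller WRITE (add/remove/update) this entry?
FsPermission mode = entry.getMode();
boolean allowed;
if (user.equals(entry.getOwnerName())) {
  allowed = mode.getUserAction().implies(FsAction.WRITE);
} else if (groups.contains(entry.getGroupName())) {
  allowed = mode.getGroupAction().implies(FsAction.WRITE);
} else {
  allowed = mode.getOtherAction().implies(FsAction.WRITE);
}
if (!allowed) {
  throw new AccessControlException("Permission denied: user=" + user);
}
{code}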






[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290401#comment-16290401
 ] 

Yiqun Lin commented on HDFS-12895:
--

bq. Not sure about user and group; show a default one? 
[~elgoiri], you raised a good point that we should be compatible with old, 
no-permissions mount table entries. We can give these old entries the 
superuser, supergroup, and 755 mode as the default permissions. This does 
introduce an incompatible change: a non-superuser won't be able to modify 
their old mount table entries. They will have to log in as the superuser first 
and update the mount table permission info; after that they can manage their 
mount tables as before.
Other comments are addressed in the updated patch.
Attaching the updated patch.
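
A minimal sketch of that defaulting, with assumed entry accessors that may not 
match the patch:

{code}
// Hedged sketch: treat pre-ACL entries as superuser-owned with 0755 mode.
if (entry.getOwnerName() == null) {
  entry.setOwnerName("hdfs");                       // assumed superuser name
  entry.setGroupName("supergroup");
  entry.setMode(new FsPermission((short) 0755));
}
{code}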

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch, HDFS-12895.004.patch, HDFS-12895.005.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has an owner, a group name, and a permission.
> For the mount table permissions we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove, or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination>
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds that the given mount table 
> already exists.






[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-13 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290391#comment-16290391
 ] 

Bharat Viswanadham commented on HDFS-12916:
---

[~aw]

bq. a) What is the goal here?
The goal is to run hdfs commands with shaded client jars.

bq. b) What exact surgery was performed to make that happen, since that's not 
how Hadoop ships out of the box.
Changed hadoop-layout.sh (mapred_home, hadoop_hdfs_home, etc.) to point only 
to the hdfs client jar location, and copied all tools and shaded client jars.

So, for this to work, we need to copy the below jars into the client location 
to make hdfs commands work. Is this the solution?

Actually, this is done as part of work to use shaded client jars for running 
hdfs commands, to avoid exposing other runtime jars to clients, and to use the 
same jars for running hdfs commands.

> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]






[jira] [Comment Edited] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-13 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290391#comment-16290391
 ] 

Bharat Viswanadham edited comment on HDFS-12916 at 12/14/17 6:17 AM:
-

[~aw]
Thanks for your reply.

bq. a) What is the goal here?
The goal is to run hdfs commands with shaded client jars.

bq. b) What exact surgery was performed to make that happen, since that's not 
how Hadoop ships out of the box.
Changed hadoop-layout.sh (mapred_home, hadoop_hdfs_home, etc.) to point only 
to the hdfs client jar location, and copied all tools and shaded client jars.

So, for this to work, we need to copy the below jars into the client location 
to make hdfs commands work. Is this the solution?

Actually, this is done as part of work to use shaded client jars for running 
hdfs commands, to avoid exposing other runtime jars to clients, and to use the 
same jars for running hdfs commands.


was (Author: bharatviswa):
[~aw]

bq. a) What is the goal here?
The goal is to run hdfs commands with shaded client jars.

bq. b) What exact surgery was performed to make that happen, since that's not 
how Hadoop ships out of the box.
Changed hadoop-layout.sh (mapred_home, hadoop_hdfs_home, etc.) to point only 
to the hdfs client jar location, and copied all tools and shaded client jars.

So, for this to work, we need to copy the below jars into the client location 
to make hdfs commands work. Is this the solution?

Actually, this is done as part of work to use shaded client jars for running 
hdfs commands, to avoid exposing other runtime jars to clients, and to use the 
same jars for running hdfs commands.

> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]






[jira] [Commented] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290366#comment-16290366
 ] 

genericqa commented on HDFS-12915:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901989/HDFS-12915.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f0fcad960dc3 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 438c1d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22398/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 

[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290364#comment-16290364
 ] 

Allen Wittenauer commented on HDFS-12916:
-

bq. HDFS commands throws error, when only shaded clients in classpath

a) What is the goal here? 

b) What exact surgery was performed to make that happen, since that's not how 
Hadoop ships out of the box.

bq. After adding below jars to classpath, commands started working

Correct.  Clients are expected to define their own logging.  That's one of the 
key features of using the shaded jars.  htrace is a bit of a surprise, but that 
might be expected too.  Thus far, I'm not really seeing a bug here.


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]






[jira] [Comment Edited] (HDFS-12883) RBF: Document Router and State Store metrics

2017-12-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290363#comment-16290363
 ] 

Yiqun Lin edited comment on HDFS-12883 at 12/14/17 5:33 AM:


Thanks for the comment, [~ajisakaa].
bq. We need to add 'incompatible change' flag and document a release note.
I will do this after committing this.


was (Author: linyiqun):
Thanks for the comment, [~ajisakaa].
bq. We need to add 'incompatible change' flag and document a release note.
I will done this.after committing this.

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.






[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics

2017-12-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290363#comment-16290363
 ] 

Yiqun Lin commented on HDFS-12883:
--

Thanks for the comment, [~ajisakaa].
bq. We need to add 'incompatible change' flag and document a release note.
I will do this after committing this.

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.






[jira] [Commented] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290358#comment-16290358
 ] 

genericqa commented on HDFS-12712:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
56s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
4s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
55s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-9806 
has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
55s{color} | {color:green} root generated 0 new + 1237 unchanged - 1 fixed = 
1237 total (was 1238) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 42s{color} | {color:orange} root: The patch generated 1 new + 629 unchanged 
- 22 fixed = 630 total (was 651) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 1 fixed = 1 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}225m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  

[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics

2017-12-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290354#comment-16290354
 ] 

Akira Ajisaka commented on HDFS-12883:
--

Thanks [~elgoiri] and [~linyiqun] for the quick responses!
Now I'm +1 to change the context from router to dfs; however, it is an 
incompatible change. If there is a release (2.9.0 and/or 3.0.0?) that has the 
router context, we need to add the 'incompatible change' flag and document a 
release note.

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.






[jira] [Commented] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290355#comment-16290355
 ] 

genericqa commented on HDFS-12915:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901990/HDFS-12915.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 314dbaeb22e5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 438c1d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-7101) Potential null dereference in DFSck#doWork()

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290329#comment-16290329
 ] 

genericqa commented on HDFS-7101:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-7101 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12783961/HDFS-7101.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a5cf6ae897ee 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 438c1d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22397/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22397/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290320#comment-16290320
 ] 

genericqa commented on HDFS-11847:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
1s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-hdfs-project: The patch generated 5 new + 
650 unchanged - 1 fixed = 655 total (was 651) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 52s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}186m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-11847 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901972/HDFS-11847.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (HDFS-12773) RBF: Improve State Store FS implementation

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290296#comment-16290296
 ] 

genericqa commented on HDFS-12773:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 28m 
36s{color} | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
31s{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
32s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
26s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 50s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 398 new + 0 unchanged - 
0 fixed = 398 total (was 0) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch has 768 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.federation.store.driver.TestStateStoreFile |
|   | hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12773 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901973/HDFS-12773.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e1861a46a05b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 438c1d3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| 

[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics

2017-12-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290262#comment-16290262
 ] 

Yiqun Lin commented on HDFS-12883:
--

[~ajisakaa], since we already have a context defined for each module (e.g. the 
dfs context and the yarn context), there is no need to define a more specific 
context here, i.e. a router context. These metrics can be shown under the dfs 
context, so I made this change.

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.






[jira] [Updated] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12915:
-
Attachment: HDFS-12915.01.patch

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch, HDFS-12915.01.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?






[jira] [Updated] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12915:
-
Attachment: (was: HDFS-12915.01.patch)

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?






[jira] [Updated] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12915:
-
Attachment: HDFS-12915.01.patch

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch, HDFS-12915.01.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?






[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics

2017-12-13 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16290257#comment-16290257
 ] 

Íñigo Goiri commented on HDFS-12883:


Right now, we have:
{{@Metrics(name = "RouterRPCActivity", about = "Router RPC Activity", context = 
"router")}}
{{@Metrics(name = "StateStoreActivity", about = "Router metrics", context = 
"router")}}
But [~linyiqun]'s change would send them to {{context="dfs"}}.
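A minimal sketch of the proposed grouping (the class name below is a placeholder, not necessarily the actual Router metrics class):

{code:java}
import org.apache.hadoop.metrics2.annotation.Metrics;

// Same annotation as above, but under the "dfs" context, so the metrics
// would be documented alongside the other HDFS metrics.
@Metrics(name = "RouterRPCActivity", about = "Router RPC Activity",
    context = "dfs")
public class RouterRpcActivity {
  // metric fields unchanged
}
{code}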

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12915:
-
Attachment: (was: HDFS-12915.01.patch)

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12915:
-
Attachment: HDFS-12915.01.patch

{{Preconditions}} doesn't support formatted strings? OK.
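For readers following along, a self-contained toy (not the actual {{INodeFile}} code) showing the NP_NULL_ON_SOME_PATH shape findbugs reports, plus the kind of explicit guard that clears it:

{code:java}
// Toy only: on the striped path `replication` is known null, yet the
// auto-unboxing at the return dereferences it unconditionally -- the
// "known null at line X, dereferenced at line Y" pattern findbugs flags.
static long toLayoutBits(boolean isStriped, Short replication) {
  if (isStriped) {
    replication = null;            // known null from here on
  }
  return replication;              // unboxing NPE on the striped path
}

// A guard findbugs can follow: fail fast before the unboxing.
static long toLayoutBitsFixed(boolean isStriped, Short replication) {
  if (isStriped) {
    return 0L;                     // striped layout encoded elsewhere
  }
  if (replication == null) {
    throw new IllegalArgumentException("replication is required");
  }
  return replication;
}
{code}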

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290255#comment-16290255
 ] 

genericqa commented on HDFS-12574:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 13s{color} | {color:orange} root: The patch generated 8 new + 698 unchanged 
- 1 fixed = 706 total (was 699) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 59s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
45s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 36s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(Path, int) may fail to 
close stream  At WebHdfsFileSystem.java:close stream  At 
WebHdfsFileSystem.java:[line 1439] |
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsContentLength |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12574 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12883) RBF: Document Router and State Store metrics

2017-12-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290246#comment-16290246
 ] 

Akira Ajisaka commented on HDFS-12883:
--

What context do the Router metrics have? If the Router metrics are in the 'dfs' 
context, the documentation for these metrics should be in the dfs context section.

> RBF: Document Router and State Store metrics
> 
>
> Key: HDFS-12883
> URL: https://issues.apache.org/jira/browse/HDFS-12883
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12883.001.patch, HDFS-12883.002.patch, 
> metric-screen-shot.jpg
>
>
> Document Router and State Store metrics in doc. This will be helpful for 
> users to monitor RBF.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-13 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290244#comment-16290244
 ] 

Konstantin Shvachko commented on HDFS-12818:


I was wondering if, by default, when {{simulatedCapacities != null}} we could set 
{{storagesPerDatanode(1)}}. Then you wouldn't need to change all these tests.
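Something along these lines inside the cluster builder would capture that (a sketch under assumed field names, not a patch):

{code:java}
// Sketch: when explicit per-volume capacities are supplied but the test
// never asked for multiple storages, keep one storage per DataNode so
// existing tests retain their old layout.
if (simulatedCapacities != null && !storagesPerDatanodeSet) {
  storagesPerDatanode = 1;
}
{code}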

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch, HDFS-12818.007.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2017-12-13 Thread Jack Bearden (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290222#comment-16290222
 ] 

Jack Bearden edited comment on HDFS-10673 at 12/14/17 2:02 AM:
---

Hey guys, thanks a lot for your work on this great optimization. 

There may be an edge case that is not being handled by the refactored code in 
2.7.4. When overriding an {{INodeAttributeProvider}}, I get the following 
NullPointerException:

{code}
java.lang.NullPointerException
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:315)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:247)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:192)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
{code}

[This|https://github.com/apache/hadoop/blob/branch-2.7.4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L244]
 is where the code diverges when an {{INodeAttributeProvider}} is provided.

[This|https://github.com/apache/hadoop/blob/branch-2.7.3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L179]
 is the null check that was in 2.7.3 and removed in 2.7.4.

I only encounter the NullPointerException when doing a {{hdfs dfs -ls /}} and 
other permutations of it.

[~zhz] could you please take a look?

Edit:
Also, I forgot to mention: this case only occurs for regular users, i.e., when 
{{FSPermissionChecker#isSuperUser()}} is false.


was (Author: jackbearden):
Hey guys, thanks a lot for your work on this great optimization. 

There may be an edge case that is not being handled by the refactored code in 
2.7.4. When overriding an {{INodeAttributeProvider}}, I get the following 
NullPointerException:

{code}
java.lang.NullPointerException
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:315)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:247)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:192)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
{code}


[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290229#comment-16290229
 ] 

genericqa commented on HDFS-12907:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 2s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 98 unchanged - 0 fixed = 105 total (was 98) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
59s{color} | {color:red} The patch generated 121 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}121m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:17 |
| Failed junit tests | hadoop.hdfs.TestFileAppendRestart |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.web.TestFSMainOperationsWebHdfs |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.TestHdfsAdmin |
|   | org.apache.hadoop.hdfs.TestSetrepDecreasing |
|   | org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure |
|   | org.apache.hadoop.hdfs.TestDatanodeDeath |
|   | org.apache.hadoop.hdfs.TestPread |
|   | org.apache.hadoop.hdfs.TestSafeMode |
|   | org.apache.hadoop.hdfs.TestDFSFinalize |
|   | org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
|   | org.apache.hadoop.hdfs.TestBlockStoragePolicy |
|   | org.apache.hadoop.hdfs.TestRollingUpgrade |
|   | org.apache.hadoop.hdfs.TestDFSInputStream |
|   | org.apache.hadoop.hdfs.TestRollingUpgradeRollback |
|   | org.apache.hadoop.hdfs.TestEncryptedTransfer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | HDFS-12907 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901962/HDFS-12907.branch-2.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d256c6a8bd5a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| 

[jira] [Comment Edited] (HDFS-7101) Potential null dereference in DFSck#doWork()

2017-12-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15124972#comment-15124972
 ] 

Ted Yu edited comment on HDFS-7101 at 12/14/17 1:53 AM:


Agreed.


was (Author: yuzhih...@gmail.com):
Agree.

> Potential null dereference in DFSck#doWork()
> 
>
> Key: HDFS-7101
> URL: https://issues.apache.org/jira/browse/HDFS-7101
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Ted Yu
>Assignee: skrho
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-7101.v1.patch, HDFS-7101_001.patch
>
>
> {code}
> String lastLine = null;
> int errCode = -1;
> try {
>   while ((line = input.readLine()) != null) {
> ...
> if (lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
>   errCode = 0;
> {code}
> If readLine() throws an exception, lastLine may be null, leading to an NPE.
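A guard along these lines would cover it (a sketch using the names from the snippet above):

{code:java}
// lastLine stays null if readLine() threw before producing any output,
// so check it before calling endsWith().
if (lastLine != null && lastLine.endsWith(NamenodeFsck.HEALTHY_STATUS)) {
  errCode = 0;
}
{code}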



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10673) Optimize FSPermissionChecker's internal path usage

2017-12-13 Thread Jack Bearden (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290222#comment-16290222
 ] 

Jack Bearden commented on HDFS-10673:
-

Hey guys, thanks a lot for your work on this great optimization. 

There may be an edge case that is not being handled by the refactored code in 
2.7.4. When overriding an {{INodeAttributeProvider}}, I get the following 
NullPointerException:

{code}
java.lang.NullPointerException
at org.apache.hadoop.hdfs.DFSUtil.bytes2String(DFSUtil.java:315)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.getINodeAttrs(FSPermissionChecker.java:247)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:192)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
{code}

[This|https://github.com/apache/hadoop/blob/branch-2.7.4/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L244]
 is where the code diverges when an {{INodeAttributeProvider}} is provided.

[This|https://github.com/apache/hadoop/blob/branch-2.7.3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java#L179]
 is the null check that was in 2.7.3 and removed in 2.7.4.

I only encounter the NullPointerException when doing a {{hdfs dfs -ls /}} and 
other permutations of it.

[~zhz] could you please take a look?
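To make the failure mode concrete, a minimal standalone reproduction (the array contents are an assumption about what the permission checker sees for the root path):

{code:java}
import org.apache.hadoop.hdfs.DFSUtil;

public class Bytes2StringRepro {
  public static void main(String[] args) {
    // For "/", position 0 of the byte[][] components array can be null;
    // converting it blindly throws the NPE from the stack trace above.
    byte[][] components = new byte[][] { null };
    DFSUtil.bytes2String(components[0]);  // java.lang.NullPointerException
  }
}
{code}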


> Optimize FSPermissionChecker's internal path usage
> --
>
> Key: HDFS-10673
> URL: https://issues.apache.org/jira/browse/HDFS-10673
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10673-branch-2.7.00.patch, HDFS-10673.1.patch, 
> HDFS-10673.2.patch, HDFS-10673.patch
>
>
> The INodeAttributeProvider and AccessControlEnforcer features degrade 
> performance and generate excessive garbage even when neither is used.  Main 
> issues:
> # A byte[][] of components is unnecessarily created.  Each path component 
> lookup converts a subrange of the byte[][] to a new String[] - then not used 
> by default attribute provider.
> # Subaccess checks are insanely expensive.  The full path of every subdir is 
> created by walking up the inode tree, creating an INode[], building a string 
> by converting each inode's byte[] name to a string, etc.  Which will only be 
> used if there's an exception.
> The expense of #1 should only be incurred when using the provider/enforcer 
> feature.  For #2, paths should be created on-demand for exceptions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290204#comment-16290204
 ] 

genericqa commented on HDFS-12915:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
3s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 42s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 13 new + 22 unchanged - 0 fixed = 35 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestINodeFile |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.namenode.TestStripedINodeFile |
|   | hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901954/HDFS-12915.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1559ed851bb4 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Status: Patch Available  (was: Open)

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Attachment: HDFS-12712-HDFS-9806.002.patch

Fixed the findbugs and checkstyle issues in [^HDFS-12712-HDFS-9806.002.patch].

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Status: Open  (was: Patch Available)

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch, 
> HDFS-12712-HDFS-9806.002.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation

2017-12-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12773:
---
Attachment: HDFS-12773.002.patch

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12773.000.patch, HDFS-12773.001.patch, 
> HDFS-12773.002.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-13 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11847:
--
Status: Patch Available  (was: Open)

> Enhance dfsadmin listOpenFiles command to list files blocking datanode 
> decommissioning
> --
>
> Key: HDFS-11847
> URL: https://issues.apache.org/jira/browse/HDFS-11847
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11847.01.patch
>
>
> HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to list 
> all the open files in the system.
> Additionally, it would be very useful to only list open files that are 
> blocking the DataNode decommissioning. With thousand+ node clusters, where 
> there might be machines added and removed regularly for maintenance, any 
> option to monitor and debug decommissioning status is very helpful. Proposal 
> here is to add suboptions to {{listOpenFiles}} for the above case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11847) Enhance dfsadmin listOpenFiles command to list files blocking datanode decommissioning

2017-12-13 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-11847:
--
Attachment: HDFS-11847.01.patch

Attached v01 patch to address the following:
1. Ability to query for open files by types of interest - like 
BLOCKING_DECOMMISSION, ALL, etc. (a rough API sketch follows below)
2. A new method FSNamesystem#getFilesBlockingDecom() to get the list of all 
open files blocking decommission
3. Basic tests. More sophisticated tests pending.
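A rough shape of the type-filtered query (names taken from this comment; the final API may differ):

{code:java}
// Sketch: callers pick which class of open files they want back.
public enum OpenFilesType {
  ALL,                    // every open file in the namespace
  BLOCKING_DECOMMISSION   // only files pinning decommissioning DataNodes
}

// A dfsadmin-side call could then filter by type, e.g.:
//   listOpenFiles(EnumSet.of(OpenFilesType.BLOCKING_DECOMMISSION));
{code}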

> Enhance dfsadmin listOpenFiles command to list files blocking datanode 
> decommissioning
> --
>
> Key: HDFS-11847
> URL: https://issues.apache.org/jira/browse/HDFS-11847
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11847.01.patch
>
>
> HDFS-10480 adds a {{listOpenFiles}} option to the {{dfsadmin}} command to list 
> all the open files in the system.
> Additionally, it would be very useful to only list open files that are 
> blocking the DataNode decommissioning. With thousand+ node clusters, where 
> there might be machines added and removed regularly for maintenance, any 
> option to monitor and debug decommissioning status is very helpful. Proposal 
> here is to add suboptions to {{listOpenFiles}} for the above case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7765) FSOutputSummer throwing ArrayIndexOutOfBoundsException

2017-12-13 Thread wan kun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290169#comment-16290169
 ] 

wan kun commented on HDFS-7765:
---

[~kturner]
{code:java}
public synchronized void write(int b) throws IOException {
  buf[count++] = (byte)b;
  if (count == buf.length) {
    flushBuffer();
  }
}
{code}
If flushBuffer() throws an IOException, count is never reset, so the next call 
evaluates buf[count++] with count == buf.length and throws the 
ArrayIndexOutOfBoundsException (and, because the index expression still 
increments count, the reported index keeps growing: 4608, 4609, ...).
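One defensive variant that avoids the stale index (a sketch against the snippet above, not the committed patch):

{code:java}
public synchronized void write(int b) throws IOException {
  // Flush *before* storing when the buffer is already full: if
  // flushBuffer() throws here, count is untouched and still a valid
  // index, so a retried write cannot run past the end of buf.
  if (count == buf.length) {
    flushBuffer();
  }
  buf[count++] = (byte) b;
}
{code}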



> FSOutputSummer throwing ArrayIndexOutOfBoundsException
> --
>
> Key: HDFS-7765
> URL: https://issues.apache.org/jira/browse/HDFS-7765
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
> Environment: Centos 6, Open JDK 7, Amazon EC2, Accumulo 1.6.2RC4
>Reporter: Keith Turner
>Assignee: Janmejay Singh
> Attachments: 
> 0001-PATCH-HDFS-7765-FSOutputSummer-throwing-ArrayIndexOu.patch, 
> HDFS-7765.patch
>
>
> While running an Accumulo test, saw exceptions like the following while 
> trying to write to write ahead log in HDFS. 
> The exception occurrs at 
> [FSOutputSummer.java:76|https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSOutputSummer.java#L76]
>  which is attempting to update a byte array.
> {noformat}
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> java.lang.ArrayIndexOutOfBoundsException: 4608
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:50)
> at java.io.DataOutputStream.write(DataOutputStream.java:88)
> at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
> at 
> org.apache.accumulo.tserver.logger.LogFileKey.write(LogFileKey.java:87)
> at org.apache.accumulo.tserver.log.DfsLogger.write(DfsLogger.java:526)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logFileData(DfsLogger.java:540)
> at 
> org.apache.accumulo.tserver.log.DfsLogger.logManyTablets(DfsLogger.java:573)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger$6.write(TabletServerLogger.java:373)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.write(TabletServerLogger.java:274)
> at 
> org.apache.accumulo.tserver.log.TabletServerLogger.logManyTablets(TabletServerLogger.java:365)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.flush(TabletServer.java:1667)
> at 
> org.apache.accumulo.tserver.TabletServer$ThriftClientHandler.closeUpdate(TabletServer.java:1754)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.accumulo.trace.instrument.thrift.RpcServerInvocationHandler.invoke(RpcServerInvocationHandler.java:46)
> at 
> org.apache.accumulo.server.util.RpcWrapper$1.invoke(RpcWrapper.java:47)
> at com.sun.proxy.$Proxy22.closeUpdate(Unknown Source)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2370)
> at 
> org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$closeUpdate.getResult(TabletClientService.java:2354)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:168)
> at 
> org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:516)
> at 
> org.apache.accumulo.server.util.CustomNonBlockingServer$1.run(CustomNonBlockingServer.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at 
> org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
> at 
> org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
> at java.lang.Thread.run(Thread.java:744)
> 2015-02-06 19:46:49,769 [log.DfsLogger] WARN : Exception syncing 
> java.lang.reflect.InvocationTargetException
> 2015-02-06 19:46:49,772 [log.DfsLogger] ERROR: 
> java.lang.ArrayIndexOutOfBoundsException: 4609
> java.lang.ArrayIndexOutOfBoundsException: 4609
> at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:76)
> at 
> 

[jira] [Commented] (HDFS-12919) RBF: support erasure coding methods in RouterRpcServer

2017-12-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290165#comment-16290165
 ] 

Íñigo Goiri commented on HDFS-12919:


[^HDFS-12919.000.patch] might be a little heavy to review.
We can go with a simpler patch where we don't throw an exception for 
{{setErasureCodingPolicy()}} and implement the EC stuff in a different JIRA.
Is there any end-to-end test that submits MR jobs and uses a mini cluster?
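For the simpler route, the stub could look roughly like this (a sketch, not the attached patch):

{code:java}
@Override // ClientProtocol
public void setErasureCodingPolicy(String src, String ecPolicyName)
    throws IOException {
  // Deliberate no-op instead of UnsupportedOperationException: lets MR
  // job submission (which only tunes its staging dir's EC policy)
  // proceed, while real EC forwarding is deferred to the follow-up JIRA.
}
{code}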

> RBF: support erasure coding methods in RouterRpcServer
> --
>
> Key: HDFS-12919
> URL: https://issues.apache.org/jira/browse/HDFS-12919
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Attachments: HDFS-12919.000.patch
>
>
> MAPREDUCE-6954 started to tune the erasure coding settings for staging files. 
> However, the {{Router}} does not support this operation and throws:
> {code}
> 17/12/12 14:36:07 INFO mapreduce.JobSubmitter: Cleaning up the staging area 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1513116010218_0002
> org.apache.hadoop.ipc.RemoteException(java.lang.UnsupportedOperationException):
>  Operation "setErasureCodingPolicy" is not supported
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.checkOperation(RouterRpcServer.java:368)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.setErasureCodingPolicy(RouterRpcServer.java:1805)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-13 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290156#comment-16290156
 ] 

Xiao Chen commented on HDFS-12910:
--

Thanks Stephen for revving! +1 on patch 5 once checkstyle and whitespace are 
fixed. Verified the tests fail without and pass with the fix.

> Secure Datanode Starter should log the port when it 
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch, HDFS-12910.004.patch, HDFS-12910.005.patch
>
>
> When running a secure data node, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports causing the DN to fail 
> to start (e.g. the nfs service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to, for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException, log the problem address:port, and then 
> re-throw the exception to make the problem clearer.
> I will upload a patch for this.
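A sketch of that catch-and-rethrow (variable and method names here are illustrative, not the actual patch):

{code:java}
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

// Bind, and on failure rethrow with the offending address:port in the
// message, preserving the original exception as the cause.
static ServerSocketChannel bindOrExplain(InetSocketAddress streamingAddr)
    throws IOException {
  ServerSocketChannel ssc = ServerSocketChannel.open();
  try {
    ssc.socket().bind(streamingAddr);
  } catch (BindException e) {
    BindException wrapped = new BindException(
        "Problem binding to " + streamingAddr + " : " + e.getMessage());
    wrapped.initCause(e);
    throw wrapped;
  }
  return ssc;
}
{code}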



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290141#comment-16290141
 ] 

genericqa commented on HDFS-12910:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m  
4s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
9s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 12 unchanged - 1 fixed = 13 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12910 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901946/HDFS-12910.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 27926a7d350d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d447152 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22390/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 

[jira] [Updated] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-13 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12000:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~vagarychen] for the contribution and all for the discussion. I've 
committed the latest patch to the feature branch. 

> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> HDFS-12000-HDFS-7240.009.patch, HDFS-12000-HDFS-7240.010.patch, 
> HDFS-12000-HDFS-7240.011.patch, HDFS-12000-HDFS-7240.012.patch, 
> OzoneVersion.001.pdf
>
>
> The REST interface of Ozone supports versioning of keys. This support comes 
> from the containers and from how chunks are managed. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290122#comment-16290122
 ] 

genericqa commented on HDFS-12712:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
51s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
37s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
29s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
55s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
54s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-9806 
has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
20s{color} | {color:green} root generated 0 new + 1237 unchanged - 1 fixed = 
1237 total (was 1238) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 43s{color} | {color:orange} root: The patch generated 5 new + 629 unchanged 
- 22 fixed = 634 total (was 651) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
51s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 1 fixed = 1 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 11s{color} 
| {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}236m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  

[jira] [Updated] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-13 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12907:
--
Attachment: HDFS-12907.branch-2.004.patch

Attaching branch-2 patch.

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.003.patch, HDFS-12907.004.patch, HDFS-12907.branch-2.004.patch, 
> HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.
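
For context, a hedged illustration of the raw-copy workflow this would open up to regular users; the paths and cluster names below are examples, not from this JIRA:

{noformat}
# Copy the raw (still-encrypted) bytes of an EZ file between clusters,
# preserving the encryption xattrs with -px; today the source side of
# this requires superuser privileges.
hadoop distcp -px hdfs://nn1/.reserved/raw/zone/file hdfs://nn2/.reserved/raw/zone/file
{noformat}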



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290100#comment-16290100
 ] 

genericqa commented on HDFS-12000:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 5s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
35s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}164m  3s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}243m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestFSImageWithAcl |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | 

[jira] [Updated] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.

2017-12-13 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12574:
--
Attachment: HDFS-12574.004.patch

Uploading a new version of the patch.
It has the following changes:
1. Updated test case variable names to be more specific.
2. Changed the NameNodeWebhdfsMethods class.
3. Rebased on the latest version of the HDFS-12907 patch.

> Add CryptoInputStream to WebHdfsFileSystem read call.
> -
>
> Key: HDFS-12574
> URL: https://issues.apache.org/jira/browse/HDFS-12574
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: encryption, kms, webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
> Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, 
> HDFS-12574.003.patch, HDFS-12574.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12923) DFS.concat should throw exception if files have different EC policies.

2017-12-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290087#comment-16290087
 ] 

Lei (Eddy) Xu commented on HDFS-12923:
--

[~elgoiri] yes, sure. {{FSDirConcatOp#verifySrcFiles()}} already checks that the 
EC policy IDs of the source and target files match, so a 
{{HadoopIllegalArgumentException}} will be thrown if they differ.

{code:title=FSDirConcatOp.java}
private static INodeFile[] verifySrcFiles(...) {
  ...
  if (srcINodeFile.getErasureCodingPolicyID() !=
      targetINode.getErasureCodingPolicyID()) {
    throw new HadoopIllegalArgumentException("Source file " + src
        + " and target file " + targetIIP.getPath()
        + " have different erasure coding policy");
  }
}
{code}

> DFS.concat should throw exception if files have different EC policies. 
> ---
>
> Key: HDFS-12923
> URL: https://issues.apache.org/jira/browse/HDFS-12923
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Priority: Critical
> Fix For: 3.0.0
>
>
> {{DFS#concat}} appends blocks from different files to a single file. However, 
> if these files have different EC policies, or are a mix of replicated and EC 
> files, the resulting file would be problematic to read, because the EC codec 
> is defined on the INode instead of on a block. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12773) RBF: Improve State Store FS implementation

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290090#comment-16290090
 ] 

genericqa commented on HDFS-12773:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileImpl.getChildren(String)
 due to return value of called method  Dereferenced at 
StateStoreFileImpl.java:org.apache.hadoop.hdfs.server.federation.store.driver.impl.StateStoreFileImpl.getChildren(String)
 due to return value of called method  Dereferenced at 
StateStoreFileImpl.java:[line 139] |
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 |
|   | hadoop.hdfs.server.federation.store.driver.TestStateStoreFile |
|   | hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12773 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12923) DFS.concat should throw exception if files have different EC policies.

2017-12-13 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290081#comment-16290081
 ] 

Íñigo Goiri commented on HDFS-12923:


[~eddyxu], could you elaborate on why this is not an issue?

> DFS.concat should throw exception if files have different EC policies. 
> ---
>
> Key: HDFS-12923
> URL: https://issues.apache.org/jira/browse/HDFS-12923
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Priority: Critical
> Fix For: 3.0.0
>
>
> {{DFS#concat}} appends blocks from different files to a single file. However, 
> if these files have different EC policies, or are a mix of replicated and EC 
> files, the resulting file would be problematic to read, because the EC codec 
> is defined on the INode instead of on a block. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-13 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290074#comment-16290074
 ] 

Daryn Sharp commented on HDFS-12907:


+1. Will check in tomorrow morning if there are no objections, [~andrew.wang]?

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.003.patch, HDFS-12907.004.patch, HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12915:
-
Status: Patch Available  (was: Open)

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?
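
To make the warning concrete, here is a minimal self-contained sketch of the NP_NULL_ON_SOME_PATH pattern it describes; the names and the layout bit are hypothetical, not the actual {{INodeFile}} code:

{code}
public class LayoutRedundancySketch {
  // 'replication' is known null on one branch, so it may only be
  // unboxed on a branch that has proved it non-null.
  static long layoutRedundancy(boolean striped, Short replication) {
    if (striped) {
      // replication is irrelevant (and may be null) for striped blocks
      return 1L << 11;  // illustrative "striped" layout bit
    }
    if (replication == null || replication < 0) {
      throw new IllegalArgumentException("Invalid replication value");
    }
    return replication;  // safe to unbox: proven non-null on this path
  }
}
{code}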



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12915) Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy

2017-12-13 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-12915:
-
Attachment: HDFS-12915.00.patch

> Fix findbugs warning in INodeFile$HeaderFormat.getBlockLayoutRedundancy
> ---
>
> Key: HDFS-12915
> URL: https://issues.apache.org/jira/browse/HDFS-12915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
> Attachments: HDFS-12915.00.patch
>
>
> It seems HDFS-12840 creates a new findbugs warning.
> Possible null pointer dereference of replication in 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Bug type NP_NULL_ON_SOME_PATH
> In class org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat
> In method 
> org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
>  Short, Byte)
> Value loaded from replication
> Dereferenced at INodeFile.java:[line 210]
> Known null at INodeFile.java:[line 207]
> From a quick look at the patch, it seems bogus though. [~eddyxu][~Sammi] 
> would you please double check?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290046#comment-16290046
 ] 

Xiaoyu Yao commented on HDFS-12000:
---

Thanks [~vagarychen] for the update. +1 for v12 patch pending Jenkins.

> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> HDFS-12000-HDFS-7240.009.patch, HDFS-12000-HDFS-7240.010.patch, 
> HDFS-12000-HDFS-7240.011.patch, HDFS-12000-HDFS-7240.012.patch, 
> OzoneVersion.001.pdf
>
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16290002#comment-16290002
 ] 

genericqa commented on HDFS-12751:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
32s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
53s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}139m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}220m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12751 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901924/HDFS-12751-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 272c071478f0 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 

[jira] [Commented] (HDFS-12910) Secure Datanode Starter should log the port when it fails to bind

2017-12-13 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289970#comment-16289970
 ] 

Stephen O'Donnell commented on HDFS-12910:
--

I attached the 005 patch which hopefully addresses the remaining comments. Let 
me know if there are more changes needed.

> Secure Datanode Starter should log the port when it fails to bind
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch, HDFS-12910.004.patch, HDFS-12910.005.patch
>
>
> When running a secure DataNode, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports, causing the DN to fail 
> to start (e.g. the NFS service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to; for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException, log the problem address:port, and then 
> re-throw the exception to make the problem clearer.
> I will upload a patch for this.
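
A minimal sketch of the catch-and-rethrow described above; this is not the attached patch, and the address handling and error output are assumptions for illustration:

{code}
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BindWithReport {
  // Bind the server socket, reporting which address:port failed before
  // re-throwing, so the jsvc error output is no longer ambiguous.
  public static ServerSocketChannel bind(InetSocketAddress addr)
      throws IOException {
    ServerSocketChannel channel = ServerSocketChannel.open();
    try {
      channel.socket().bind(addr);
    } catch (BindException e) {
      System.err.println("Failed to bind to " + addr
          + "; is another service already using the port?");
      throw e;
    }
    return channel;
  }
}
{code}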



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12910) Secure Datanode Starter should log the port when it fails to bind

2017-12-13 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-12910:
-
Attachment: HDFS-12910.005.patch

> Secure Datanode Starter should log the port when it fails to bind
> 
>
> Key: HDFS-12910
> URL: https://issues.apache.org/jira/browse/HDFS-12910
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Minor
> Attachments: HDFS-12910.001.patch, HDFS-12910.002.patch, 
> HDFS-12910.003.patch, HDFS-12910.004.patch, HDFS-12910.005.patch
>
>
> When running a secure DataNode, the default ports it uses are 1004 and 1006. 
> Sometimes other OS services can start on these ports, causing the DN to fail 
> to start (e.g. the NFS service can use random ports under 1024).
> When this happens an error is logged by jsvc, but it is confusing as it does 
> not tell you which port it is having issues binding to; for example, when 
> port 1004 is used by another process:
> {code}
> Initializing secure datanode resources
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> And when port 1006 is used:
> {code}
> Opened streaming server at /0.0.0.0:1004
> java.net.BindException: Address already in use
> at sun.nio.ch.Net.bind0(Native Method)
> at sun.nio.ch.Net.bind(Net.java:433)
> at sun.nio.ch.Net.bind(Net.java:425)
> at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
> at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
> at 
> org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
> Cannot load daemon
> Service exit with a return value of 3
> {code}
> We should catch the BindException, log the problem address:port, and then 
> re-throw the exception to make the problem clearer.
> I will upload a patch for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12912:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 
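
A minimal sketch of the directory fix, assuming the leveldbjni API ({{JniDBFactory}}) that Hadoop bundles; the class and method names are illustrative, not the committed patch:

{code}
import java.io.File;
import java.io.IOException;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;
import static org.fusesource.leveldbjni.JniDBFactory.factory;

public class AliasMapStoreSketch {
  // Create the store directory if it is absent before opening leveldb,
  // so pointing the alias map at a fresh path no longer fails.
  public static DB open(File dir) throws IOException {
    if (!dir.exists() && !dir.mkdirs()) {
      throw new IOException("Unable to create alias map dir " + dir);
    }
    Options options = new Options();
    options.createIfMissing(true);
    return factory.open(dir, options);
  }
}
{code}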



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-13 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289959#comment-16289959
 ] 

Virajith Jalaparti commented on HDFS-12912:
---

Thanks [~elgoiri]. Committing this to the feature branch.

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with new version (3.0) of hadoop

2017-12-13 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289960#comment-16289960
 ] 

Chris Douglas commented on HDFS-12920:
--

I understand the issue, Junping. An alternative to reverting the change is to 
deprecate the old property and create a new one that understands time units, as 
was raised in that JIRA. If specifying units breaks rolling upgrades, then what 
is the point of adding units, ever?

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with new version (3.0) of hadoop
> -
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Junping Du
>Priority: Blocker
>
> After HADOOP-15059 got resolved, I tried to deploy a 2.9.0 tarball with 3.0.0 
> RC1, and ran the job with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling upgrade story, so it should be marked as a blocker.
> A quick workaround is to add values in hdfs-site.xml with all time units 
> removed. But the right way may be to revert HDFS-10845 (and get rid of noisy 
> warnings).
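
As a hedged illustration of the quick workaround, an override like the following removes the unit suffix so old clients can parse the value; the property shown is one of the keys HDFS-10845 suffixed, but check the failing stack trace for the exact keys in your deployment:

{code:title=hdfs-site.xml}
<!-- Override the "30s" default with a unit-less value that old-version
     MR jars can still parse; the property name here is illustrative. -->
<property>
  <name>dfs.client.datanode-restart.timeout</name>
  <value>30</value>
</property>
{code}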



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12923) DFS.concat should throw exception if files have different EC policies.

2017-12-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu resolved HDFS-12923.
--
   Resolution: Won't Fix
Fix Version/s: 3.0.0

Resolved as not an issue.

> DFS.concat should throw exception if files have different EC policies. 
> ---
>
> Key: HDFS-12923
> URL: https://issues.apache.org/jira/browse/HDFS-12923
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0
>Reporter: Lei (Eddy) Xu
>Priority: Critical
> Fix For: 3.0.0
>
>
> {{DFS#concat}} appends blocks from different files to a single file. However, 
> if these files have different EC policies, or are a mix of replicated and EC 
> files, the resulting file would be problematic to read, because the EC codec 
> is defined on the INode instead of on a block. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289928#comment-16289928
 ] 

genericqa commented on HDFS-11201:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-11201 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843481/HDFS-11201.4.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22388/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 3.0.1
>
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11942) make new chooseDataNode policy work in more operation like seek, fetch

2017-12-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11942:
---
Fix Version/s: (was: 3.0.0)
   3.0.1

> make new  chooseDataNode policy  work in more operation like seek, fetch
> 
>
> Key: HDFS-11942
> URL: https://issues.apache.org/jira/browse/HDFS-11942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0, 3.0.0-alpha3
>Reporter: Fangyuan Deng
> Fix For: 3.0.1
>
> Attachments: HDFS-11942.0.patch, HDFS-11942.1.patch, 
> ssd-first-disable(default).png, ssd-first-enable.png
>
>
> with the default policy, if a file is ONE_SSD, the client will prefer to read 
> the local disk replica rather than the remote SSD replica.
> but now, PCIe SSDs and 10G Ethernet make reading a remote SSD faster than 
> the local disk.
> HDFS-9666 gave us a patch, but the code is not complete and has not been 
> updated for a long time.
> this sub-task issue gives a complete patch, and 
> we have tested on three machines [ 32 core CPU, 128G mem, 1000M network, 
> 1.2T HDD, 800G SSD (Intel P3600) ].
> with this feature, the throughput of an HBase table (ONE_SSD) is double that 
> without this feature



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2017-12-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11201:
---
Fix Version/s: (was: 3.0.0)
   3.0.1

> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 3.0.1
>
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12773) RBF: Improve State Store FS implementation

2017-12-13 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12773:
---
Attachment: HDFS-12773.001.patch

> RBF: Improve State Store FS implementation
> --
>
> Key: HDFS-12773
> URL: https://issues.apache.org/jira/browse/HDFS-12773
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12773.000.patch, HDFS-12773.001.patch
>
>
> HDFS-10630 introduced a filesystem implementation of the State Store for unit 
> tests. However, this implementation doesn't handle multiple writers 
> concurrently.
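
One common way to make a filesystem-backed store tolerate concurrent writers is to write each record to a unique temporary file and atomically rename it into place; the sketch below illustrates that idea under stated assumptions and is not the attached patch:

{code}
import java.io.IOException;
import java.util.UUID;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AtomicRecordWriterSketch {
  // Write to a unique temp file, then rename it over the record so
  // readers never observe partial data; a failed rename means another
  // writer won the race.
  public static void write(FileSystem fs, Path record, byte[] data)
      throws IOException {
    Path tmp = new Path(record.getParent(),
        record.getName() + "." + UUID.randomUUID());
    try (FSDataOutputStream out = fs.create(tmp, false)) {
      out.write(data);
    }
    if (!fs.rename(tmp, record)) {
      fs.delete(tmp, false);
      throw new IOException("Lost concurrent update race for " + record);
    }
  }
}
{code}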



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-13 Thread Lukas Majercak (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289898#comment-16289898
 ] 

Lukas Majercak commented on HDFS-12895:
---

Hi [~linyiqun]. Could we make all the members of {{ACLEntity}} final?

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch, HDFS-12895.004.patch, HDFS-12895.005.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> The mount table permissions (FsPermission), here we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove or update this mount table info.
> # EXECUTE permission: This won't be used.
> The add command of the mount table will be extended like this
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination> [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode>* is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755.
> If we want to update the ACL info of a specified mount table, just execute the 
> add command again. This command not only adds a new mount table but also 
> updates the mount table once it finds the given mount table exists. 
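
A hypothetical invocation of the extended command once the ACL support lands; the mount point, nameservice and principal names are examples:

{noformat}
# Add (or update) a mount entry owned by hdfs:hadoop; with mode 0755,
# only the owner can modify it while group and others can read it.
$HADOOP_HOME/bin/hdfs dfsrouteradmin -add /data ns0 /data -owner hdfs -group hadoop -mode 0755
{noformat}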



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-13 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289895#comment-16289895
 ] 

Íñigo Goiri commented on HDFS-12912:


Unit tests seem unrelated. +1

> [READ] Fix configuration and implementation of LevelDB-based alias maps
> ---
>
> Key: HDFS-12912
> URL: https://issues.apache.org/jira/browse/HDFS-12912
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12912-HDFS-9806.001.patch, 
> HDFS-12912-HDFS-9806.002.patch
>
>
> {{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
> directory is absent.
> {{InMemoryAliasMap}} does not support reading from leveldb-based alias map 
> created from {{LevelDBFileRegionAliasMap}} with the block id configured. 
> Further, the configuration for these aliasmaps must be specified using local 
> paths and not as URIs as currently shown in the documentation 
> ({{HdfsProvidedStorage.md}}).
> This JIRA is to fix these issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-13 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289865#comment-16289865
 ] 

Rushabh S Shah commented on HDFS-12907:
---

The failed test has nothing to do with my patch. The only change from the previous 
patch is fixing whitespace warnings.
TestRetryCacheWithHA#testUpdatePipeline is tracked by HDFS-7524.


> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.003.patch, HDFS-12907.004.patch, HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Patch Available  (was: Open)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-13 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289809#comment-16289809
 ] 

Virajith Jalaparti edited comment on HDFS-9806 at 12/13/17 8:16 PM:


Fixed the checkstyle and findbugs issues from the last jenkins run as part of 
HDFS-12712. These changes are included in [^HDFS-9806.002.patch].


was (Author: virajith):
Fixed the checkstyle and findbugs issues from the last jenkins run as part of 
HDFS-12712. These changes are included in HDFS-9806.002.patch

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Attachment: HDFS-9806.002.patch

Fixed the checkstyle and findbugs issues from the last jenkins run as part of 
HDFS-12712. These changes are included in HDFS-9806.002.patch

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-9806) Allow HDFS block replicas to be provided by an external storage system

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-9806:
-
Status: Open  (was: Patch Available)

> Allow HDFS block replicas to be provided by an external storage system
> --
>
> Key: HDFS-9806
> URL: https://issues.apache.org/jira/browse/HDFS-9806
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Chris Douglas
> Attachments: HDFS-9806-design.001.pdf, HDFS-9806-design.002.pdf, 
> HDFS-9806.001.patch, HDFS-9806.002.patch
>
>
> In addition to heterogeneous media, many applications work with heterogeneous 
> storage systems. The guarantees and semantics provided by these systems are 
> often similar, but not identical to those of 
> [HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/index.html].
>  Any client accessing multiple storage systems is responsible for reasoning 
> about each system independently, and must propagate and renew credentials for 
> each store.
> Remote stores could be mounted under HDFS. Block locations could be mapped to 
> immutable file regions, opaque IDs, or other tokens that represent a 
> consistent view of the data. While correctness for arbitrary operations 
> requires careful coordination between stores, in practice we can provide 
> workable semantics with weaker guarantees.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289777#comment-16289777
 ] 

Virajith Jalaparti edited comment on HDFS-12712 at 12/13/17 8:09 PM:
-

Patch v001 cleaning up code-style issues (along with those reported in [this | 
https://issues.apache.org/jira/browse/HDFS-9806?focusedCommentId=16288834=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288834]
 jenkins run on HDFS-9806).


was (Author: virajith):
Patch cleaning up code-style issues (along with those reported in [this | 
https://issues.apache.org/jira/browse/HDFS-9806?focusedCommentId=16288834=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288834]
 jenkins run on HDFS-9806).

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Status: Patch Available  (was: Open)

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Attachment: HDFS-12712-HDFS-9806.001.patch

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289556#comment-16289556
 ] 

Íñigo Goiri edited comment on HDFS-12895 at 12/13/17 7:56 PM:
--

Thanks [~linyiqun] for taking care of the comments. A couple more based on 
[^HDFS-12895.005.patch]:
* It seems we are defaulting to nulls when nothing is there. The entries created 
earlier still show without permissions. We probably want to check for an empty 
{{getMode()}} at {{MountTableImpl}} (see the sketch after this list). Not sure 
about user and group; should we show a default one? The super user? This is 
what I currently get for old entries:
{code}
Mount Table Entries:
Source  Destinations  Owner  Group  Mode
/       ns0->/        null   null   -
/ns0    ns0->/        null   null   -
/ns1    ns1->/        null   null   -
/ns2    ns2->/        null   null   -
/ns3    ns3->/        null   null   -
{code}
* {{removeMountTableEntry()}} could initialize {{deleteEntry}} right away 
without the null: {{final MountTable deleteEntry = 
getDriver().get(getRecordClass(), query);}}
* For consistency, {{removeMountTableEntry()}} could do the same order of 
{{if}} as {{addMountTableEntry()}} and {{updateMountTableEntry()}} and avoid 
the {{pc}} init if no entry. In addition, the check status could use this if 
structure:
{code}
boolean status = false;
if (deleteEntry != null) {
  RouterPermissionChecker pc = RouterAdminServer.getPermissionChecker();
  if (pc != null) {
    pc.checkPermission(deleteEntry, FsAction.WRITE);
  }
  status = getDriver().remove(deleteEntry);
}
{code}
* Typo in the name {{testMountTableDefalutACL}}
* Internally, {{assertEquals(ugi.getShortUserName(), 
mountTable.getGroupName());}} fails; I don't think {{getShortUserName()}} is 
the right value to compare against the group.

Other than this, this is ready to go.
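
A minimal sketch of the empty-mode fallback suggested above, assuming 
{{MountTableImpl}} wraps an underlying record (the accessor and the default 
value are illustrative, not the actual patch):
{code}
@Override
public FsPermission getMode() {
  FsPermission mode = this.record.getMode();  // hypothetical underlying accessor
  // Entries created before ACL support carry no mode; fall back to a default.
  return (mode != null) ? mode : new FsPermission((short) 0755);
}
{code}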


was (Author: elgoiri):
Thanks [~linyiqun] for taking care of the comments. A couple more based on 
[^HDFS-12895.005.patch]:
* It seems we are defaulting to nulls when nothing is there. The entries created 
earlier still show without permissions. We probably want to check for an empty 
{{getMode()}} at {{MountTableImpl}}. Not sure about user and group; should we 
show a default one? The super user? This is what I currently get for old entries:
{code}
Mount Table Entries:
Source  Destinations  Owner  Group  Mode
/       ns0->/        null   null   -
/ns0    ns0->/        null   null   -
/ns1    ns1->/        null   null   -
/ns2    ns2->/        null   null   -
/ns3    ns3->/        null   null   -
{code}
* {{removeMountTableEntry()}} could initialize {{deleteEntry}} right away 
without the null: {{final MountTable deleteEntry = 
getDriver().get(getRecordClass(), query);}}
* For consistency, {{removeMountTableEntry()}} could do the same order of 
{{if}} as {{addMountTableEntry()}} and {{updateMountTableEntry()}} and avoid 
the {{pc}} init if no entry. In addition, the check status could use this if 
structure:
{code}
boolean status = false;
if (deleteEntry != null) {
  RouterPermissionChecker pc = RouterAdminServer.getPermissionChecker();
  if (pc != null) {
    pc.checkPermission(deleteEntry, FsAction.WRITE);
  }
  status = getDriver().remove(deleteEntry);
}
{code}
* Typo in the name {{testMountTableDefalutACL}}

Other than this, this is ready to go.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch, HDFS-12895.004.patch, HDFS-12895.005.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> The mount table permissions 

[jira] [Comment Edited] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289556#comment-16289556
 ] 

Íñigo Goiri edited comment on HDFS-12895 at 12/13/17 7:53 PM:
--

Thanks [~linyiqun] for taking care of the comments. A couple more based on 
[^HDFS-12895.005.patch]:
* It seems we are defaulting to nulls when nothing is there. The entries created 
earlier still show without permissions. We probably want to check for an empty 
{{getMode()}} at {{MountTableImpl}}. Not sure about user and group; should we 
show a default one? The super user? This is what I currently get for old entries:
{code}
Mount Table Entries:
Source  Destinations  Owner  Group  Mode
/       ns0->/        null   null   -
/ns0    ns0->/        null   null   -
/ns1    ns1->/        null   null   -
/ns2    ns2->/        null   null   -
/ns3    ns3->/        null   null   -
{code}
* {{removeMountTableEntry()}} could initialize {{deleteEntry}} right away 
without the null: {{final MountTable deleteEntry = 
getDriver().get(getRecordClass(), query);}}
* For consistency, {{removeMountTableEntry()}} could do the same order of 
{{if}} as {{addMountTableEntry()}} and {{updateMountTableEntry()}} and avoid 
the {{pc}} init if no entry. In addition, the check status could use this if 
structure:
{code}
boolean status = false;
if (deleteEntry != null) {
  RouterPermissionChecker pc = RouterAdminServer.getPermissionChecker();
  if (pc != null) {
    pc.checkPermission(deleteEntry, FsAction.WRITE);
  }
  status = getDriver().remove(deleteEntry);
}
{code}
* Typo in the name {{testMountTableDefalutACL}}

Other than this, this is ready to go.


was (Author: elgoiri):
Thanks [~linyiqun] for taking care of the comments. A couple more based on 
[^HDFS-12895.005.patch]:
* It seems we are defaulting to nulls when nothing is there. The entries created 
earlier still show without permissions. We probably want to check for an empty 
{{getMode()}} at {{MountTableImpl}}. Not sure about user and group; should we 
show a default one? The super user? This is what I currently get for old entries:
{code}
Mount Table Entries:
Source  Destinations  Owner  Group  Mode
/       ns0->/        null   null   -
/ns0    ns0->/        null   null   -
/ns1    ns1->/        null   null   -
/ns2    ns2->/        null   null   -
/ns3    ns3->/        null   null   -
{code}
* {{removeMountTableEntry()}} could initialize {{deleteEntry}} right away 
without the null: {{final MountTable deleteEntry = 
getDriver().get(getRecordClass(), query);}}
* For consistency, {{removeMountTableEntry()}} could do the same order of 
{{if}} as {{addMountTableEntry()}} and {{updateMountTableEntry()}} and avoid 
the {{pc}} init if no entry. In addition, the check status could use this if 
structure:
{code}
boolean status = false;
if (deleteEntry != null) {
  RouterPermissionChecker pc = RouterAdminServer.getPermissionChecker();
  if (pc != null) {
    pc.checkPermission(deleteEntry, FsAction.WRITE);
  }
  status = getDriver().remove(deleteEntry);
}
{code}
Other than this, this is ready to go.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch, HDFS-12895.004.patch, HDFS-12895.005.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> The mount table permissions (FsPermission), here we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove, or update this mount 

[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289786#comment-16289786
 ] 

genericqa commented on HDFS-12907:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
5s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 101 unchanged - 0 fixed = 108 total (was 101) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12907 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901914/HDFS-12907.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 493c01a47cc6 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 10fc8d2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22382/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22382/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Attachment: (was: HDFS-12712-HDFS-9806.001.patch)

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Status: Open  (was: Patch Available)

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Status: Patch Available  (was: Open)

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12712) [9806] Code style cleanup

2017-12-13 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12712:
--
Attachment: HDFS-12712-HDFS-9806.001.patch

Patch cleaning up code-style issues (along with those reported in [this | 
https://issues.apache.org/jira/browse/HDFS-9806?focusedCommentId=16288834=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16288834]
 jenkins run on HDFS-9806).

> [9806] Code style cleanup
> -
>
> Key: HDFS-12712
> URL: https://issues.apache.org/jira/browse/HDFS-12712
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Virajith Jalaparti
>Priority: Minor
> Attachments: HDFS-12712-HDFS-9806.001.patch
>
>
> The code for HDFS-9806 could use some style cleaning before merging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12000) Ozone: Container : Add key versioning support-1

2017-12-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12000:
--
Attachment: HDFS-12000-HDFS-7240.012.patch

v012 patch to rebase.

> Ozone: Container : Add key versioning support-1
> ---
>
> Key: HDFS-12000
> URL: https://issues.apache.org/jira/browse/HDFS-12000
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Chen Liang
>  Labels: OzonePostMerge
> Attachments: HDFS-12000-HDFS-7240.001.patch, 
> HDFS-12000-HDFS-7240.002.patch, HDFS-12000-HDFS-7240.003.patch, 
> HDFS-12000-HDFS-7240.004.patch, HDFS-12000-HDFS-7240.005.patch, 
> HDFS-12000-HDFS-7240.007.patch, HDFS-12000-HDFS-7240.008.patch, 
> HDFS-12000-HDFS-7240.009.patch, HDFS-12000-HDFS-7240.010.patch, 
> HDFS-12000-HDFS-7240.011.patch, HDFS-12000-HDFS-7240.012.patch, 
> OzoneVersion.001.pdf
>
>
> The rest interface of ozone supports versioning of keys. This support comes 
> from the containers and how chunks are managed to support this feature. This 
> JIRA tracks that feature. Will post a detailed design doc so that we can talk 
> about this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-13 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289702#comment-16289702
 ] 

Erik Krogen edited comment on HDFS-12818 at 12/13/17 6:48 PM:
--

{quote}
Looks like the test failures on the last run were infrastructure-related; OOM 
due to thread issues and all of them passed successfully on my local.
{quote}
Repeat of this comment from before re: the most recent Jenkins run.


was (Author: xkrogen):
> Looks like the test failures on the last run were infrastructure-related; OOM 
> due to thread issues and all of them passed successfully on my local.
Repeat of this comment from before re: the most recent Jenkins run.

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch, HDFS-12818.007.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12818) Support multiple storages in DataNodeCluster / SimulatedFSDataset

2017-12-13 Thread Erik Krogen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289702#comment-16289702
 ] 

Erik Krogen commented on HDFS-12818:


> Looks like the test failures on the last run were infrastructure-related; OOM 
> due to thread issues and all of them passed successfully on my local.
Repeat of this comment from before re: the most recent Jenkins run.

> Support multiple storages in DataNodeCluster / SimulatedFSDataset
> -
>
> Key: HDFS-12818
> URL: https://issues.apache.org/jira/browse/HDFS-12818
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, test
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12818.000.patch, HDFS-12818.001.patch, 
> HDFS-12818.002.patch, HDFS-12818.003.patch, HDFS-12818.004.patch, 
> HDFS-12818.005.patch, HDFS-12818.006.patch, HDFS-12818.007.patch
>
>
> Currently {{SimulatedFSDataset}} (and thus, {{DataNodeCluster}} with 
> {{-simulated}}) only supports a single storage per {{DataNode}}. Given that 
> the number of storages can have important implications on the performance of 
> block report processing, it would be useful for these classes to support a 
> multiple storage configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12916) HDFS commands throws error, when only shaded clients in classpath

2017-12-13 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289693#comment-16289693
 ] 

Bharat Viswanadham commented on HDFS-12916:
---

Ping [~busbey] again.


> HDFS commands throws error, when only shaded clients in classpath
> -
>
> Key: HDFS-12916
> URL: https://issues.apache.org/jira/browse/HDFS-12916
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> [root@n001 hadoop]# bin/hdfs dfs -rm /
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/htrace/core/Tracer$Builder
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:303)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.htrace.core.Tracer$Builder
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 4 more
> cc [~busbey]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12265) Ozone : better handling of operation fail due to chill mode

2017-12-13 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289691#comment-16289691
 ] 

Chen Liang commented on HDFS-12265:
---

Looks like this has been handled as part of HDFS-12387; closing this JIRA.

> Ozone : better handling of operation fail due to chill mode
> ---
>
> Key: HDFS-12265
> URL: https://issues.apache.org/jira/browse/HDFS-12265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Priority: Minor
>  Labels: OzonePostMerge
>
> Currently if someone tries to create a container while SCM is in chill mode, 
> there will be an INTERNAL_ERROR exception, which is not very informative and 
> can be confusing for debugging.
> We should make it easier to identify problems caused by chill mode. For 
> example, we may detect if SCM is in chill mode and report back to the client 
> in some way, such that the client can back off and try again later.
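> A sketch of the explicit chill-mode signal described above (both the check 
> and the result code are assumptions, not verified against the SCM code):
> {code}
> if (scm.isInChillMode()) {  // hypothetical check
>   throw new SCMException("SCM is in chill mode, try again later",
>       SCMException.ResultCodes.CHILL_MODE_EXCEPTION);  // assumed result code
> }
> {code}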



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12265) Ozone : better handling of operation fail due to chill mode

2017-12-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang resolved HDFS-12265.
---
  Resolution: Fixed
Release Note: Looks like this has been handled as part of HDFS-12387, close 
this JIRA.

> Ozone : better handling of operation fail due to chill mode
> ---
>
> Key: HDFS-12265
> URL: https://issues.apache.org/jira/browse/HDFS-12265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Priority: Minor
>  Labels: OzonePostMerge
>
> Currently if someone tries to create a container while SCM is in chill mode, 
> there will be an INTERNAL_ERROR exception, which is not very informative and 
> can be confusing for debugging.
> We should make it easier to identify problems caused by chill mode. For 
> example, we may detect if SCM is in chill mode and report back to the client 
> in some way, such that the client can back off and try again later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12265) Ozone : better handling of operation fail due to chill mode

2017-12-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12265:
--
Release Note:   (was: Looks like this has been handled as part of 
HDFS-12387, close this JIRA.)

> Ozone : better handling of operation fail due to chill mode
> ---
>
> Key: HDFS-12265
> URL: https://issues.apache.org/jira/browse/HDFS-12265
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Chen Liang
>Priority: Minor
>  Labels: OzonePostMerge
>
> Currently if someone tries to create a container while SCM is in chill mode, 
> there will be an INTERNAL_ERROR exception, which is not very informative and 
> can be confusing for debugging.
> We should make it easier to identify problems caused by chill mode. For 
> example, we may detect if SCM is in chill mode and report back to the client 
> in some way, such that the client can back off and try again later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12751) Ozone: SCM: update container allocated size to container db for all the open containers in ContainerStateManager#close

2017-12-13 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12751:
--
Attachment: HDFS-12751-HDFS-7240.002.patch

Fixed checkstyle; the failed tests are unrelated and passed in a local run.

> Ozone: SCM: update container allocated size to container db for all the open 
> containers in ContainerStateManager#close
> --
>
> Key: HDFS-12751
> URL: https://issues.apache.org/jira/browse/HDFS-12751
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Chen Liang
> Attachments: HDFS-12751-HDFS-7240.001.patch, 
> HDFS-12751-HDFS-7240.002.patch
>
>
> The container allocated size is maintained in memory by 
> {{ContainerStateManager}}; this has to be updated in the container db when we 
> shut down SCM. {{ContainerStateManager#close}} will be called during SCM 
> shutdown, so updating the allocated size for all the open containers should 
> be done there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12911) [SPS]: Fix review comments from discussions in HDFS-10285

2017-12-13 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-12911:
---
Description: 
This is the JIRA for tracking the possible improvements or issues discussed in 
the main JIRA.

Comments to handle so far:
Daryn:
 # The lock should not be kept while executing the placement policy.
 # While starting up the NN, the SPS Xattr checks happen even if the feature is 
disabled. This could potentially impact the startup speed. 

UMA:
# I am adding one more possible improvement to reduce Xattr objects 
significantly.
 The SPS Xattr is a constant object, so we can create one deduplicated Xattr 
object once, statically, and reuse the same object reference whenever the SPS 
Xattr needs to be added to an Inode. The additional bytes required for storing 
the SPS Xattr then shrink to a single object reference (i.e., 4 bytes on 32 
bit), so the Xattr overhead should come down significantly IMO. Let's explore 
the feasibility of this option (see the sketch after this list).
The Xattr list Feature will not be specially created for SPS; that list would 
already have been created by SetStoragePolicy on the same directory. So, no 
extra Feature creation happens because of SPS alone.
# Currently SPS puts Long id objects in a queue for tracking the Inodes on 
which SPS was called. Each Long is additionally created, and its size would be 
(obj ref + value) = (8 + 8) bytes [ignoring alignment for the time being].
The possible improvement here is, instead of creating a new Long object, to 
keep the existing inode object for tracking. The advantage is that the Inode 
object is already maintained in the NN, so no new object creation is needed; we 
just need to maintain one object reference. The above two points should 
significantly reduce the memory requirements of SPS. So, per SPS call: 8 bytes 
for called-inode tracking + 8 bytes for the Xattr ref.
# Use LightWeightLinkedSet instead of LinkedList for the queue. This will 
avoid unnecessary Node creations inside LinkedList. 
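
A minimal sketch of the deduplication idea in point 1 (the xattr name and the 
helper usage are illustrative, not the actual constants):
{code}
// One immutable XAttr instance, created once and shared by reference for
// every inode, so the per-inode cost is a single object reference.
private static final XAttr SPS_XATTR =
    XAttrHelper.buildXAttr("system.hdfs.satisfy.storage.policy");

static XAttr getSpsXAttr() {
  return SPS_XATTR;
}
{code}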



  was:
This is the JIRA for tracking the possible improvements or issues discussed in 
the main JIRA.

Comments to handle so far:
Daryn:
 # The lock should not be kept while executing the placement policy.
 # While starting up the NN, the SPS Xattr checks happen even if the feature is 
disabled. This could potentially impact the startup speed. 

UMA:
# I am adding one more possible improvement to reduce Xattr objects 
significantly.
 The SPS Xattr is a constant object, so we can create one deduplicated Xattr 
object once, statically, and reuse the same object reference whenever the SPS 
Xattr needs to be added to an Inode. The additional bytes required for storing 
the SPS Xattr then shrink to a single object reference (i.e., 4 bytes on 32 
bit), so the Xattr overhead should come down significantly IMO. Let's explore 
the feasibility of this option.
The Xattr list Feature will not be specially created for SPS; that list would 
already have been created by SetStoragePolicy on the same directory. So, no 
extra Feature creation happens because of SPS alone.
# Currently SPS puts Long id objects in a queue for tracking the Inodes on 
which SPS was called. Each Long is additionally created, and its size would be 
(obj ref + value) = (8 + 8) bytes [ignoring alignment for the time being].
The possible improvement here is, instead of creating a new Long object, to 
keep the existing inode object for tracking. The advantage is that the Inode 
object is already maintained in the NN, so no new object creation is needed; we 
just need to maintain one object reference. The above two points should 
significantly reduce the memory requirements of SPS. So, per SPS call: 8 bytes 
for called-inode tracking + 8 bytes for the Xattr ref.




> [SPS]: Fix review comments from discussions in HDFS-10285
> -
>
> Key: HDFS-12911
> URL: https://issues.apache.org/jira/browse/HDFS-12911
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Uma Maheswara Rao G
>Assignee: Rakesh R
>
> This is the JIRA for tracking the possible improvements or issues discussed 
> in the main JIRA.
> Comments to handle so far:
> Daryn:
>  # The lock should not be kept while executing the placement policy.
>  # While starting up the NN, the SPS Xattr checks happen even if the feature 
> is disabled. This could potentially impact the startup speed. 
> UMA:
> # I am adding one more possible improvement to reduce Xattr objects 
> significantly.
>  The SPS Xattr is a constant object, so we can create one deduplicated Xattr 
> object once, statically, and reuse the same object reference whenever the SPS 
> Xattr needs to be added to an Inode. The additional bytes required for 
> storing the SPS Xattr then shrink to a single object reference (i.e., 4 bytes 
> on 32 bit), so the Xattr overhead should come down significantly IMO. Let's 
> explore the feasibility of this option.
> The Xattr list Feature will not be specially created for SPS; that list would 
> already have been created by SetStoragePolicy on the same directory. So, no 
> extra Feature 

[jira] [Commented] (HDFS-12741) ADD support for KSM --createObjectStore command

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289587#comment-16289587
 ] 

genericqa commented on HDFS-12741:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
12s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 40s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901898/HDFS-12741-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c10cf32170a5 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 070ad84 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22381/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 

[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-13 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289556#comment-16289556
 ] 

Íñigo Goiri commented on HDFS-12895:


Thanks [~linyiqun] for taking care of the comments. A couple more based on 
[^HDFS-12895.005.patch]:
* It seems we are defaulting to nulls when nothing is there. The entries created 
earlier still show without permissions. We probably want to check for an empty 
{{getMode()}} at {{MountTableImpl}}. Not sure about user and group; should we 
show a default one? The super user? This is what I currently get for old entries:
{code}
Mount Table Entries:
Source  Destinations  Owner  Group  Mode
/       ns0->/        null   null   -
/ns0    ns0->/        null   null   -
/ns1    ns1->/        null   null   -
/ns2    ns2->/        null   null   -
/ns3    ns3->/        null   null   -
{code}
* {{removeMountTableEntry()}} could initialize {{deleteEntry}} right away 
without the null: {{final MountTable deleteEntry = 
getDriver().get(getRecordClass(), query);}}
* For consistency, {{removeMountTableEntry()}} could do the same order of 
{{if}} as {{addMountTableEntry()}} and {{updateMountTableEntry()}} and avoid 
the {{pc}} init if no entry. In addition, the check status could use this if 
structure:
{code}
boolean status = false;
if (deleteEntry != null) {
  RouterPermissionChecker pc = RouterAdminServer.getPermissionChecker();
  if (pc != null) {
    pc.checkPermission(deleteEntry, FsAction.WRITE);
  }
  status = getDriver().remove(deleteEntry);
}
{code}
Other than this, this is ready to go.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch, HDFS-12895.004.patch, HDFS-12895.005.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table has its owner, group name and permission.
> The mount table permissions (FsPermission), here we use 
> {{org.apache.hadoop.fs.permission.FsPermission}} to do the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove, or update this mount table info.
> # EXECUTE permission: This won't be used.
> The add command of the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination> 
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode> is UNIX-style permissions for the mount table. Permissions are 
> specified in octal, e.g. 0755. By default, this is set to 0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. This command not only adds a new mount table entry but 
> also updates an existing entry once it finds the given mount table already exists. 
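> For example, a mount entry with its ACL info could be added (and later 
> updated) with something like:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin -add /data ns0 /data -owner hdfs -group hadoop -mode 0755
> {noformat}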



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12908) Ozone: write chunk call fails because of Metrics registry exception

2017-12-13 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12908:
-
Attachment: HDFS-12908-HDFS-7240.002.patch

Thanks for the offline discussion [~nandakumar131].

I have changed the container db file name to incorporate the container name as 
discussed. Please have a look.
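
A sketch of the resulting registration, assuming the db file name now embeds 
the container name (the naming below is illustrative):
{code}
// A per-container db name keeps the MBean ObjectName unique and avoids the
// duplicate "Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db".
String dbName = containerName + ".container.db";  // hypothetical naming
this.mbean = MBeans.register("Ozone", "RocksDbStore,dbName=" + dbName, this);
{code}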

> Ozone: write chunk call fails because of Metrics registry exception
> ---
>
> Key: HDFS-12908
> URL: https://issues.apache.org/jira/browse/HDFS-12908
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12908-HDFS-7240.001.patch, 
> HDFS-12908-HDFS-7240.002.patch
>
>
> The write chunk call fails because of a Metrics registration exception.
> {code}
> 2017-12-08 04:02:19,894 WARN org.apache.hadoop.metrics2.util.MBeans: Error 
> creating MBean object name: 
> Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db
> org.apache.hadoop.metrics2.MetricsException: 
> org.apache.hadoop.metrics2.MetricsException: 
> Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db already exists!
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:135)
> at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newMBeanName(DefaultMetricsSystem.java:110)
> at 
> org.apache.hadoop.metrics2.util.MBeans.getMBeanName(MBeans.java:155)
> at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:87)
> at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:77)
> at 
> org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:115)
> at 
> org.apache.hadoop.ozone.container.common.utils.ContainerCache.getDB(ContainerCache.java:138)
> at 
> org.apache.hadoop.ozone.container.common.helpers.KeyUtils.getDB(KeyUtils.java:65)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.readContainerInfo(ContainerManagerImpl.java:261)
> at 
> org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:330)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.handleCreateContainer(Dispatcher.java:399)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.containerProcessHandler(Dispatcher.java:158)
> at 
> org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:105)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:61)
> at 
> org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:32)
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
> at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
> at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1302)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
> at 
> 

[jira] [Updated] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-13 Thread Rushabh S Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh S Shah updated HDFS-12907:
--
Attachment: HDFS-12907.004.patch

I feel bad wasting Jenkins resources on fixing checkstyle warnings alone.
Here is the diff against the previous patch:
{noformat}
 diff ~/patches/jira/HDFS-12907.003.patch  ~/patches/jira/HDFS-12907.004.patch 
117c117
< index 8aa5dc9b92d..834e570b291 100644
---
> index 8aa5dc9b92d..43eeadf0e50 100644
230c230
< +  .createUserForTesting("fakeUser", new String[] { "fakeGroup" });
---
> +  .createUserForTesting("fakeUser", new String[] {"fakeGroup"});
{noformat}

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.003.patch, HDFS-12907.004.patch, HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.
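
To make the use case concrete, here is a minimal sketch of a non-superuser
raw read (the path below is hypothetical; this is an illustration, not patch
code):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Copies the raw, still-encrypted bytes of an EZ file by opening it
// through the /.reserved/raw prefix instead of its normal path.
public class RawReadSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path raw = new Path("/.reserved/raw/zone/file");  // hypothetical EZ file
    try (FSDataInputStream in = fs.open(raw)) {
      IOUtils.copyBytes(in, System.out, 4096, false);
    }
  }
}
{code}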



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-13 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289490#comment-16289490
 ] 

Rushabh S Shah commented on HDFS-12907:
---

All of the test failures are due to {{unable to create new native thread}}, 
except for the ones listed below.
1. TestUnderReplicatedBlocks#testSetRepIncWithUnderReplicatedBlocks: The test 
timed out. Tracked by HDFS-9243
2. TestDecommissioningStatus#testDecommissionLosingData: It failed with the 
following error message.
{noformat}
Problem binding to [localhost:60134] java.net.BindException: Address already in 
use; For more details see:  http://wiki.apache.org/hadoop/BindException
{noformat}

It passes locally on my machine.
{noformat}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.331 s 
- in org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatus
{noformat}

Regarding the checkstyle issues: 6 of the 8 warnings are due to the 
indentation of switch statements. I will fix the remaining 2 in the next 
revision.

As always, the findbugs warning is not related to the patch; it is tracked by 
HDFS-12915.

[~daryn]: please review. This patch is blocking HDFS-12574.

> Allow read-only access to reserved raw for non-superusers
> -
>
> Key: HDFS-12907
> URL: https://issues.apache.org/jira/browse/HDFS-12907
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Daryn Sharp
>Assignee: Rushabh S Shah
> Attachments: HDFS-12907.001.patch, HDFS-12907.002.patch, 
> HDFS-12907.003.patch, HDFS-12907.patch
>
>
> HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
> contents of EZ files.  In the simplest sense it doesn't return the FE info in 
> the {{LocatedBlocks}} so the dfs client doesn't try to decrypt the data.  
> This facilitates allowing tools like distcp to copy raw bytes.
> Access to the raw hierarchy is restricted to superusers.  This seems like an 
> overly broad restriction designed to prevent non-admins from munging the EZ 
> related xattrs.  I believe we should relax the restriction to allow 
> non-admins to perform read-only operations.  Allowing non-superusers to 
> easily read the raw bytes will be extremely useful for regular users, esp. 
> for enabling webhdfs client-side encryption.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289355#comment-16289355
 ] 

genericqa commented on HDFS-12895:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12895 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901886/HDFS-12895.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 1e24a8422047 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 880cd75 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22380/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22380/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-12741) ADD support for KSM --createObjectStore command

2017-12-13 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12741:
---
Attachment: HDFS-12741-HDFS-7240.006.patch

Thanks [~nandakumar131] for the review comments. Patch v6 addresses them along 
with the checkstyle issues. Please have a look.

> ADD support for KSM --createObjectStore command
> ---
>
> Key: HDFS-12741
> URL: https://issues.apache.org/jira/browse/HDFS-12741
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12741-HDFS-7240.001.patch, 
> HDFS-12741-HDFS-7240.002.patch, HDFS-12741-HDFS-7240.003.patch, 
> HDFS-12741-HDFS-7240.004.patch, HDFS-12741-HDFS-7240.005.patch, 
> HDFS-12741-HDFS-7240.006.patch
>
>
> KSM --createObjectStore command reads the ozone configuration information and 
> creates the KSM version file and reads the SCM version file from the SCM 
> specified.
>   
> The SCM version file is stored in the KSM metadata directory and before 
> communicating with an SCM KSM verifies that it is communicating with an SCM 
> where the relationship has been established via createObjectStore command.
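
As a rough illustration only (the version-file format, key names, and paths 
below are assumptions, not the real KSM layout), persisting such a file could 
look like:

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical sketch: writing cluster identity into a version file
// under the KSM metadata directory.
public class VersionFileSketch {
  public static void main(String[] args) throws IOException {
    Properties props = new Properties();
    props.setProperty("clusterId", "CID-example");  // assumed key name
    props.setProperty("scmId", "SCM-example");      // assumed key name
    File metaDir = new File("/tmp/ksm-meta");       // assumed metadata dir
    metaDir.mkdirs();
    try (FileOutputStream out = new FileOutputStream(new File(metaDir, "VERSION"))) {
      props.store(out, "KSM version file (illustrative)");
    }
  }
}
{code}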



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12741) ADD support for KSM --createObjectStore command

2017-12-13 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289271#comment-16289271
 ] 

Nanda kumar commented on HDFS-12741:


Thanks [~shashikant] for updating the patch; it looks pretty good to me.

Some minor nits:

{{getOzoneMetadirPath}} can be renamed to {{getOzoneMetaDirPath}}

*KeySpaceManager*

The {{if (!DFSUtil.isOzoneEnabled(conf))}} check is done twice, once inside 
{{main}} and again in {{createKSM}}. The one inside {{main}} can be removed.

Line:278 {{System.out.println}} should be {{System.err.println}}

Line:310 If {{clusterId}} is missing we should throw an exception. In the 
current patch we continue writing the version file without {{clusterId}}.

Line:313 Same as {{clusterId}}: we should throw an error if {{scmId}} is 
missing (see the sketch below).
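
To make the suggestion concrete, here is a minimal sketch of the guard (the 
class name, method name, and messages are hypothetical, not from the patch):

{code}
import java.io.IOException;

/** Hypothetical sketch of the validation suggested above. */
final class ScmInfoValidator {
  static void checkScmInfo(String clusterId, String scmId) throws IOException {
    if (clusterId == null || clusterId.isEmpty()) {
      // Refuse to write the KSM version file without a cluster identity.
      throw new IOException("clusterId missing from the SCM version file");
    }
    if (scmId == null || scmId.isEmpty()) {
      throw new IOException("scmId missing from the SCM version file");
    }
  }
}
{code}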


> ADD support for KSM --createObjectStore command
> ---
>
> Key: HDFS-12741
> URL: https://issues.apache.org/jira/browse/HDFS-12741
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12741-HDFS-7240.001.patch, 
> HDFS-12741-HDFS-7240.002.patch, HDFS-12741-HDFS-7240.003.patch, 
> HDFS-12741-HDFS-7240.004.patch, HDFS-12741-HDFS-7240.005.patch
>
>
> KSM --createObjectStore command reads the ozone configuration information and 
> creates the KSM version file and reads the SCM version file from the SCM 
> specified.
>   
> The SCM version file is stored in the KSM metadata directory and before 
> communicating with an SCM KSM verifies that it is communicating with an SCM 
> where the relationship has been established via createObjectStore command.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12741) ADD support for KSM --createObjectStore command

2017-12-13 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289270#comment-16289270
 ] 

genericqa commented on HDFS-12741:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 0 unchanged - 1 fixed = 6 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.web.client.TestKeysRatis |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.ozone.ksm.TestKeySpaceManager |
|   | hadoop.cblock.TestCBlockReadWrite |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12741 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12901872/HDFS-12741-HDFS-7240.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 31f1f4efc501 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |

[jira] [Updated] (HDFS-12895) RBF: Add ACL support for mount table

2017-12-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12895:
-
Attachment: HDFS-12895.005.patch

Attaching a new patch to fix the checkstyle and javadoc warnings.

> RBF: Add ACL support for mount table
> 
>
> Key: HDFS-12895
> URL: https://issues.apache.org/jira/browse/HDFS-12895
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: RBF
> Attachments: HDFS-12895.001.patch, HDFS-12895.002.patch, 
> HDFS-12895.003.patch, HDFS-12895.004.patch, HDFS-12895.005.patch
>
>
> Adding ACL support for the Mount Table management. Following is the initial 
> design of ACL control for the mount table management.
> Each mount table entry has an owner, a group name, and a permission.
> The mount table permission (here we use 
> {{org.apache.hadoop.fs.permission.FsPermission}}) drives the access check:
> # READ permission: you can read the mount table info.
> # WRITE permission: you can add, remove, or update this mount table info.
> # EXECUTE permission: this won't be used.
> The add command for the mount table will be extended like this:
> {noformat}
> $HADOOP_HOME/bin/hdfs dfsrouteradmin [-add <source> <nameservice> <destination> 
>  [-owner <owner>] [-group <group>] [-mode <mode>]]
> {noformat}
> *<mode>* is UNIX-style permissions for the mount table, specified in octal, 
> e.g. 0755. By default, this is set to *0755*.
> If we want to update the ACL info of a specified mount table, we just execute 
> the add command again. The command not only adds a new mount table entry but 
> also updates an existing entry once it finds the given mount table already 
> exists.
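
As a minimal sketch of the permission model described above (the mode value is 
made up and this is not the patch code), the {{FsPermission}} check could look 
like:

{code}
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical sketch: deciding what "other" users may do to a mount
// entry whose mode is 0755.
public class MountAclSketch {
  public static void main(String[] args) {
    FsPermission mode = new FsPermission((short) 0755);
    boolean otherCanRead = mode.getOtherAction().implies(FsAction.READ);
    boolean otherCanWrite = mode.getOtherAction().implies(FsAction.WRITE);
    System.out.println("other read: " + otherCanRead);   // true for 0755
    System.out.println("other write: " + otherCanWrite); // false for 0755
  }
}
{code}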



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


