[jira] [Commented] (HDFS-13214) RBF: Complete document of Router configuration

2018-03-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389198#comment-16389198
 ] 

Hudson commented on HDFS-13214:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13785 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13785/])
HDFS-13214. RBF: Complete document of Router configuration. Contributed (yqlin: 
rev 58ea2d7a65ccd8b7775021bae1d24b9e5561e67b)
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterNamenodeMonitoring.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSRouterFederation.md
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java


> RBF: Complete document of Router configuration
> --
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to contain:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Routers. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the Router then tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.
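> 
> The error message above suggests the server-side workaround: pin the Router's 
> own identity so that only one address can match. A minimal sketch (the values 
> follow the example above and are illustrative):
> {code}
> <property>
>   <name>dfs.nameservice.id</name>
>   <value>ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenode.id</name>
>   <value>r1</value>
> </property>
> {code}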






[jira] [Commented] (HDFS-13214) RBF: Complete document of Router configuration

2018-03-06 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389162#comment-16389162
 ] 

Yiqun Lin commented on HDFS-13214:
--

Thanks, [~Tao Jie]. Committing...

> RBF: Complete document of Router configuration
> --
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to contain:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Routers. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the Router then tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.






[jira] [Updated] (HDFS-13214) RBF: Complete document of Router configuration

2018-03-06 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13214:
-
Summary: RBF: Complete document of Router configuration  (was: RBF: 
Configuration on Router conflicts with client side configuration)

> RBF: Complete document of Router configuration
> --
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to contain:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Routers. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the Router then tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.






[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389132#comment-16389132
 ] 

Weiwei Wu commented on HDFS-13212:
--

Step 1: visit PATH-A, which does not match any mount point

In MountTableResolver#lookupLocation, if an input source path (PATH-A) does not 
match any mount point, the method returns a default location 
(DEFAULT-LOCATION) with a null sourcePath (code line 393 below) and adds it to 
the locationCache.
{code:java}
382    public PathLocation lookupLocation(final String path) {
383      PathLocation ret = null;
384      MountTable entry = findDeepest(path);
385      if (entry != null) {
386        ret = buildLocation(path, entry);
387      } else {
388        // Not found, use default location
389        RemoteLocation remoteLocation =
390            new RemoteLocation(defaultNameService, path);
391        List<RemoteLocation> locations =
392            Collections.singletonList(remoteLocation);
393        ret = new PathLocation(null, locations); // a location with null sourcePath
394      }
395      return ret;
396    }
{code}
 

Step 2: add a mount point for PATH-A

When adding a mount point for PATH-A, the Router needs to invalidate the 
previously cached default location; otherwise the newly added mount point will 
never take effect, because the locationCache will keep returning the 
DEFAULT-LOCATION.

invalidateLocationCache walks the whole locationCache looking for matching 
sourcePath values, so it hits a null pointer exception at code line 241 below.

 
{code:java}
227    private void invalidateLocationCache(final String path) {
228      LOG.debug("Invalidating {} from {}", path, locationCache);
229      if (locationCache.size() == 0) {
230        return;
231      }
232
233      // Go through the entries and remove the ones from the path to invalidate
234      ConcurrentMap<String, PathLocation> map = locationCache.asMap();
235      Set<Entry<String, PathLocation>> entries = map.entrySet();
236      Iterator<Entry<String, PathLocation>> it = entries.iterator();
237      while (it.hasNext()) {
238        Entry<String, PathLocation> entry = it.next();
239        PathLocation loc = entry.getValue();
240        String src = loc.getSourcePath();
241        if (src.startsWith(path)) {
242          LOG.debug("Removing {}", src);
243          it.remove();
244        }
245      }
246
247      LOG.debug("Location cache after invalidation: {}", locationCache);
248    }
{code}
 

 

This case is covered by the test code below:
{code:java}
+// Add the default location to location cache
+mountTable.getDestinationForPath("/testlocationcache");
+
+// Add the entry again but mount to another ns
+Map map3 = getMountTableEntry("3", "/testlocationcache");
+MountTable entry3 = MountTable.newInstance("/testlocationcache", map3);
+entries.add(entry3);
+mountTable.refreshEntries(entries);
+
+// Ensure location cache update correctly
+assertEquals("3->/testlocationcache/",
+mountTable.getDestinationForPath("/testlocationcache").toString());
{code}
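
For reference, a minimal sketch of the kind of guard that avoids the NPE for 
cache entries whose sourcePath is null (names follow the loop in 
invalidateLocationCache above; the actual change is in the attached patch):
{code:java}
String src = loc.getSourcePath();
if (src != null) {
  if (src.startsWith(path)) {
    LOG.debug("Removing {}", src);
    it.remove();
  }
} else {
  // A default location has a null sourcePath; match it by its destination
  String dest = loc.getDefaultLocation().getDest();
  if (dest.startsWith(path)) {
    LOG.debug("Removing default cache {}", dest);
    it.remove();
  }
}
{code}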
 

 

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a cached location. The old location cache 
> entry will never be invalidated until the mount point changes again.
> We need to invalidate the location cache when adding mount table entries.






[jira] [Commented] (HDFS-11600) Refactor TestDFSStripedOutputStreamWithFailure test classes

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389121#comment-16389121
 ] 

genericqa commented on HDFS-11600:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 26 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 400 unchanged - 
3 fixed = 403 total (was 403) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 11 new + 0 unchanged - 0 fixed = 11 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-11600 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12910082/HDFS-11600.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7a3a8ef7d904 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / edf9445 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23327/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 

[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389069#comment-16389069
 ] 

genericqa commented on HDFS-13212:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestStartup |
|   | hadoop.hdfs.server.federation.router.TestRouterQuota |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13212 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913314/HDFS-13212-005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fc4b7838920a 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 346caa2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23326/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |

[jira] [Comment Edited] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-06 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389026#comment-16389026
 ] 

Tao Jie edited comment on HDFS-13214 at 3/7/18 4:56 AM:


Sorry for the late reply, and thank you [~linyiqun] [~elgoiri] for working on 
this JIRA.
It is clear to me now :)
Some other minor suggestions about the document:
1. Rebalancing data across subclusters, mentioned in the 2.9.0/3.0.0 GA 
documentation, is not ready today, right? We'd better avoid misleading users 
while the function is not available (I tried to find the rebalancing 
procedure for a while :) ).
2. The Architecture diagram implies that the subclusters are independent HDFS 
clusters. Actually, a subcluster could also be a federated cluster, or a mix 
of federated and independent clusters. We could mention this explicitly in 
the document.
I am OK with handling this in another JIRA.
+1 for the current patch.


was (Author: tao jie):
Sorry for the late reply, and thank you [~linyiqun] [~elgoiri] for working on 
this JIRA.
It is clear to me now :)
Some other minor suggestions about the document:
1. Rebalancing data across subclusters, mentioned in the 2.9.0/3.0.0 GA 
documentation, is not ready today, right? We'd better avoid misleading users 
while the function is not available (I tried to find the rebalancing 
procedure for a while :) ).
2. The Architecture diagram implies that the subclusters are independent HDFS 
clusters. Actually, a subcluster could also be a federated cluster, or a mix 
of federated and independent clusters. We could mention this explicitly in 
the document.
I'am ok to handle this in another jira.
+1 for the current patch.

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to contain:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Routers. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the Router then tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.






[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-06 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16389026#comment-16389026
 ] 

Tao Jie commented on HDFS-13214:


Sorry for the late reply, and thank you [~linyiqun] [~elgoiri] for working on 
this JIRA.
It is clear to me now :)
Some other minor suggestions about the document:
1. Rebalancing data across subclusters, mentioned in the 2.9.0/3.0.0 GA 
documentation, is not ready today, right? We'd better avoid misleading users 
while the function is not available (I tried to find the rebalancing 
procedure for a while :) ).
2. The Architecture diagram implies that the subclusters are independent HDFS 
clusters. Actually, a subcluster could also be a federated cluster, or a mix 
of federated and independent clusters. We could mention this explicitly in 
the document.
I am OK with handling this in another JIRA.
+1 for the current patch.

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to contain:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Routers. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the Router then tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.






[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388964#comment-16388964
 ] 

Hudson commented on HDFS-13109:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13784 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13784/])
HDFS-13109. Support fully qualified hdfs path in EZ commands. (xyao: rev 
edf9445708ffb7a9e59cb933e049b540f99add1e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsAdmin.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java


> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.2, 3.2.0
>
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch, 
> HDFS-13109.006.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation itself succeeds, as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved, and the above error is thrown.
> A fully qualified path should be supported by {{crypto}}.
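> 
> For illustration, the kind of resolution that makes both forms equivalent (a 
> sketch against the public Hadoop FileSystem API, not the actual patch):
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class ResolveExample {
>   public static void main(String[] args) throws IOException {
>     Configuration conf = new Configuration();
>     Path p = new Path("hdfs://ns1/zone1");
>     // getFileSystem() picks the FileSystem for the path's scheme/authority
>     FileSystem fs = p.getFileSystem(conf);
>     // resolvePath() qualifies and resolves the path against fs, so
>     // "/zone1" and "hdfs://ns1/zone1" end up in the same form
>     System.out.println(fs.resolvePath(p));
>   }
> }
> {code}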






[jira] [Updated] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13109:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.0.2
   2.8.4
   2.9.1
   2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Thanks [~hanishakoneru] for the contribution and all for the discussions and 
reviews. I've committed the patch to trunk, branch-3.1, branch-3.0, branch-2.9, 
branch-2.8 and branch-2.

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.2, 3.2.0
>
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch, 
> HDFS-13109.006.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation itself succeeds, as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved, and the above error is thrown.
> A fully qualified path should be supported by {{crypto}}.






[jira] [Updated] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-13109:
--
Attachment: HDFS-13109.006.patch

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 3.0.2, 3.2.0
>
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch, 
> HDFS-13109.006.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation itself succeeds, as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved, and the above error is thrown.
> A fully qualified path should be supported by {{crypto}}.






[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388945#comment-16388945
 ] 

Íñigo Goiri commented on HDFS-13212:


Thanks [~wuweiwei] for [^HDFS-13212-005.patch].
It probably makes sense, but I cannot quite figure it out; can you go into detail on:
{code}
246    } else {
247      String dest = loc.getDefaultLocation().getDest();
248      if (dest.startsWith(path)) {
249        LOG.debug("Removing default cache {}", dest);
250        it.remove();
251      }
252    }
{code}
When is {{invalidateLocationCache()}} called in a scenario that would give a 
null source? And why do we check the default location?
Is this case covered by the unit test?

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a cached location. The old location cache 
> entry will never be invalidated until the mount point changes again.
> We need to invalidate the location cache when adding mount table entries.






[jira] [Resolved] (HDFS-12654) APPEND API call is different in HTTPFS and NameNode REST

2018-03-06 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki resolved HDFS-12654.
-
Resolution: Not A Problem

I'm closing this as not-a-problem. Please reopen if you can reproduce the 500 
issue; we should update the title on reopen. Thanks for pinging me, [~Sammi].

> APPEND API call is different in HTTPFS and NameNode REST
> 
>
> Key: HDFS-12654
> URL: https://issues.apache.org/jira/browse/HDFS-12654
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, httpfs, namenode
>Affects Versions: 2.6.0, 2.7.0, 2.8.0, 3.0.0-beta1
>Reporter: Andras Czesznak
>Priority: Major
>
> The APPEND REST API call behaves differently in the NameNode REST and the 
> HTTPFS code. The NameNode version creates the target file that the new data 
> is being appended to if it does not exist at the time of the call. The HTTPFS 
> version assumes the target file exists when APPEND is called and can only 
> append the new data; it does not create the target file if it doesn't exist.
> The two implementations should be standardized; preferably, the HTTPFS 
> version should be modified to execute an implicit CREATE if the target file 
> does not exist.
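> 
> For reference, a sketch of the two calls being compared (host names, ports, 
> and the sample file are illustrative; WebHDFS appends via a two-step 
> redirect, while HttpFS accepts the data directly):
> {code:bash}
> # NameNode REST (WebHDFS): step 1 returns a 307 redirect to a DataNode
> curl -i -X POST "http://namenode:9870/webhdfs/v1/user/abc/file1?op=APPEND"
> # step 2: send the data to the Location header returned by step 1
> curl -i -X POST -T data.bin "$LOCATION"
> 
> # HTTPFS (default port 14000): a single call carrying the data
> curl -i -X POST -H "Content-Type: application/octet-stream" -T data.bin \
>   "http://httpfs:14000/webhdfs/v1/user/abc/file1?op=APPEND&data=true"
> {code}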






[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread Weiwei Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Wu updated HDFS-13212:
-
Attachment: HDFS-13212-005.patch

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a cached location. The old location cache 
> entry will never be invalidated until the mount point changes again.
> We need to invalidate the location cache when adding mount table entries.






[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388933#comment-16388933
 ] 

Weiwei Wu commented on HDFS-13212:
--

Sorry for the FindBugs error; I just fixed it and added a new patch.

MountTableResolver#lookupLocation adds a default location when no fixed mount 
point matches. Without this patch, that cached location is never invalidated 
until the mount point changes again.
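
In other words, a compact illustration based on the unit test in this JIRA 
(the path "/data" and the {{entries}} list are placeholders):
{code:java}
// 1. No mount entry matches "/data", so lookupLocation caches the
// default location, whose sourcePath is null
mountTable.getDestinationForPath("/data");

// 2. A mount entry for "/data" is added and the entries are refreshed
mountTable.refreshEntries(entries);

// 3. Without the patch this still returns the stale default location,
// because invalidation never matches the null-sourcePath cache entry
mountTable.getDestinationForPath("/data");
{code}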

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a cached location. The old location cache 
> entry will never be invalidated until the mount point changes again.
> We need to invalidate the location cache when adding mount table entries.






[jira] [Commented] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry

2018-03-06 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388932#comment-16388932
 ] 

Íñigo Goiri commented on HDFS-13226:


Thanks [~maobaolong] for the examples; that makes sense.
I would go for your first proposal; switching to exceptions might be too 
disruptive.
The only thing is, you should probably still do the check:
{code}
if (this.getDestinations() == null || this.getDestinations().size() == 0) {
  LOG.error("Invalid entry, no destination paths specified ", this);
  return false;
}
{code}

> RBF: We should throw the failure validate and refuse this mount entry
> -
>
> Key: HDFS-13226
> URL: https://issues.apache.org/jira/browse/HDFS-13226
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: RBF
> Fix For: 3.2.0
>
> Attachments: HDFS-13226.001.patch
>
>
> One of the mount entry source path rules is that the source path must start 
> with '/'. Somebody didn't follow the rule and executed the following command:
> {code:bash}
> $ hdfs dfsrouteradmin -add addnode/ ns1 /addnode/
> {code}
> But the console shows that the entry was added successfully.






[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-06 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388929#comment-16388929
 ] 

Íñigo Goiri commented on HDFS-13224:


{quote}
In addition, the patch looks a little big and not convenient to review. Based 
on the order type, can we split the patch into three parts:
# The basic implementation of OrderedResolver, including the LocalResolver and 
RandomResolver.
# Implement the hash resolver, including the HashFirstResolver and 
HashResolver.
# The available-space-based order type could be the third part. I can help 
implement this if you are busy.
{quote}
Agreed. I may even do one just for the RouterRpcServer; let me try to split 
this locally into pieces, and I may end up doing it in 3 or 4 JIRAs.
I'll create one for the available-space-based order and assign it to you.
I'll give it a try tomorrow.

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.






[jira] [Commented] (HDFS-13226) RBF: We should throw the failure validate and refuse this mount entry

2018-03-06 Thread maobaolong (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388924#comment-16388924
 ] 

maobaolong commented on HDFS-13226:
---


{code:java}
public boolean validate() {
  boolean ret = super.validate();
  if (this.getSourcePath() == null || this.getSourcePath().length() == 0) {
    LOG.error("Invalid entry, no source path specified ", this);
    ret = false;
  }
  if (!this.getSourcePath().startsWith("/")) {
    LOG.error("Invalid entry, all mount points must start with / ", this);
    ret = false;
  }
  if (this.getDestinations() == null || this.getDestinations().size() == 0) {
    LOG.error("Invalid entry, no destination paths specified ", this);
    ret = false;
  }
  for (RemoteLocation loc : getDestinations()) {
    String nsId = loc.getNameserviceId();
    if (nsId == null || nsId.length() == 0) {
      LOG.error("Invalid entry, invalid destination nameservice ", this);
      ret = false;
    }
    if (loc.getDest() == null || loc.getDest().length() == 0) {
      LOG.error("Invalid entry, invalid destination path ", this);
      ret = false;
    }
    if (!loc.getDest().startsWith("/")) {
      LOG.error("Invalid entry, all destination must start with / ", this);
      ret = false;
    }
  }
  return ret;
}
{code}

Let's discuss this method. I think it has the following problems:

- If this.getSourcePath() returns null, this.getSourcePath().startsWith("/") 
will throw an NPE.
- If the sourcePath is null, the validate method is not even invoked; instead, 
the NPE occurs with the following stack, because the normalizeFileSystemPath 
method finds the null or empty src or dest first:
{code:java}
java.lang.NullPointerException
at org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos$GetMountTableEntriesRequestProto$Builder.setSrcPath(HdfsServerFederationProtos.java:15937)
at org.apache.hadoop.hdfs.server.federation.store.protocol.impl.pb.GetMountTableEntriesRequestPBImpl.setSrcPath(GetMountTableEntriesRequestPBImpl.java:73)
at org.apache.hadoop.hdfs.server.federation.store.protocol.GetMountTableEntriesRequest.newInstance(GetMountTableEntriesRequest.java:38)
at org.apache.hadoop.hdfs.tools.federation.RouterAdmin.addMount(RouterAdmin.java:280)
at org.apache.hadoop.hdfs.tools.federation.RouterAdmin.addMount(RouterAdmin.java:258)
at org.apache.hadoop.hdfs.tools.federation.RouterAdmin.run(RouterAdmin.java:154)
{code}
- If the nsId is null:
{code:bash}
java.lang.NullPointerException
at org.apache.hadoop.hdfs.tools.federation.RouterAdmin.addMount(RouterAdmin.java:224)
at org.apache.hadoop.hdfs.tools.federation.RouterAdmin.run(RouterAdmin.java:154)
{code}
- If the source starts with more than one '/', the entry is created 
successfully as a mount entry different from the single-'/' version.
For example:
{code:bash}
hdfs dfsrouteradmin -add //addnode/ ns1 //addnode/
hdfs dfsrouteradmin -add /addnode/ ns1 /addnode/

Mount Table Entries:
Source      Destinations     Owner  Group  Mode       Quota/Usage
//addnode/  ns1->//addnode/  hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
/addnode    ns1->/addnode    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
{code}
- If an error has already occurred, should we keep checking for the remaining 
problems?

So, I want to modify validate to the following version:
{code:java}
public boolean validate() {
  if (!super.validate()) {
    return false;
  }
  if (!this.getSourcePath().startsWith("/")
      || this.getSourcePath().startsWith("//")) {
    LOG.error("Invalid entry, all mount points must start with a single / ",
        this);
    return false;
  }
  for (RemoteLocation loc : getDestinations()) {
    String nsId = loc.getNameserviceId();
    if (nsId.length() == 0) {
      LOG.error("Invalid entry, invalid destination nameservice ", this);
      return false;
    } else if (!loc.getDest().startsWith("/")
        || loc.getDest().startsWith("//")) {
      // check the destination for a double slash, same as the source
      LOG.error("Invalid entry, all destination must start with a single / ",
          this);
      return false;
    }
  }
  return true;
}
{code}
Or 
{code:java}
@Override
public boolean validate() {
  if (!super.validate()) {
    return false;
  }
  if (!this.getSourcePath().startsWith("/")
      || this.getSourcePath().startsWith("//")) {
    throw new IllegalArgumentException(
        "Invalid entry, all mount points must start with a single / ");
  }
  for (RemoteLocation loc : getDestinations()) {
    String nsId = loc.getNameserviceId();
    if (nsId.length() == 0) {
      throw new 

[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-06 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388923#comment-16388923
 ] 

Yiqun Lin commented on HDFS-13214:
--

[~Tao Jie], I suppose you may be a little busy recently. I will hold off 
committing until the end of today. If you have further comments after the 
commit, we can file another JIRA to continue the discussion.

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to contain:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used by clients to access the Routers. 
> However, with this configuration on a server node, the Router fails to start 
> with the following error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> When the Router then tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.






[jira] [Assigned] (HDFS-13238) Missing EC Data block throws warn message with full stackTrace

2018-03-06 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-13238:
-

Assignee: Bharat Viswanadham

> Missing EC Data block throws warn message with full stackTrace
> --
>
> Key: HDFS-13238
> URL: https://issues.apache.org/jira/browse/HDFS-13238
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs
>Reporter: Hanisha Koneru
>Assignee: Bharat Viswanadham
>Priority: Major
>
> If an EC data block is missing or corrupted, the following warning message 
> is logged when the client tries to read the file.
> {code:java}
> $ hdfs dfs -get /user/abc/file1
> 2018-03-06 22:53:32,156 WARN impl.BlockReaderFactory: I/O error constructing 
> remote block reader.
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1641043599-127.0.0.1-1520368608283:blk_-9223372036854775776_1002 received 
> exception java.io.FileNotFoundException: BlockId -9223372036854775776 is not 
> valid., for OP_READ_BLOCK, self=/127.0.0.1:60502, remote=/127.0.0.1:9866, 
> for file /user/abc/file1, for pool BP-1641043599-127.0.0.1-1520368608283 
> block -9223372036854775776_1002
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:447)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:415)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:860)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:756)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:390)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:256)
> at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:293)
> at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:323)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:318)
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:391)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:815)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:485)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
> at 
> org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
> at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
> 2018-03-06 22:53:32,167 WARN hdfs.DFSClient: Failed to connect to 
> /127.0.0.1:9866 for 
> blockBP-1641043599-127.0.0.1-1520368608283:blk_-9223372036854775776_1002
> java.io.IOException: Got error, status=ERROR, status message opReadBlock 
> BP-1641043599-127.0.0.1-1520368608283:blk_-9223372036854775776_1002 received 
> exception java.io.FileNotFoundException: BlockId -9223372036854775776 is not 
> valid., for OP_READ_BLOCK, self=/127.0.0.1:60502, remote=/127.0.0.1:9866, for 
> file /user/abc/file1, for pool BP-1641043599-127.0.0.1-1520368608283 block 
> -9223372036854775776_1002
> {code}

[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-06 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388917#comment-16388917
 ] 

Yiqun Lin commented on HDFS-13224:
--

Hi [~elgoiri],
{quote}That sounds good. My only concern is the size of the patch (I'm even 
considering removing some stuff from the current one). What about doing it in a 
follow-up JIRA?
{quote}
Yes, we can file another JIRA to track this and use the same idea from 
HDFS-8131 that [~ywskycn] mentioned.

In addition, the patch looks a little big and is not convenient to review. 
Based on the order type, can we split the patch into three parts (see the 
sketch after this list):
 * The basic implementation of OrderedResolver, including the LocalResolver and 
RandomResolver.
 * The implementation of the hash resolvers, including the HashFirstResolver 
and HashResolver.
 * The available-space-based order type could be the third part. I can help 
implement this if you are busy.
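
A rough sketch of the first part, using hypothetical names that mirror this 
comment (the actual HDFS-13224 interfaces may differ):
{code:java}
import java.util.List;
import java.util.Random;

// Hypothetical sketch: an order type picks the first subcluster to use among
// the destinations that a multi-subcluster mount point spans.
interface OrderedResolver {
  String getFirstNamespace(String path, List<String> subclusters);
}

// RANDOM order: spread new files uniformly across the subclusters.
class RandomResolver implements OrderedResolver {
  private final Random random = new Random();

  @Override
  public String getFirstNamespace(String path, List<String> subclusters) {
    return subclusters.get(random.nextInt(subclusters.size()));
  }
}
{code}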

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy

2018-03-06 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-13239:
-

Assignee: Bharat Viswanadham

> Fix non-empty dir warning message when setting default EC policy
> 
>
> Key: HDFS-13239
> URL: https://issues.apache.org/jira/browse/HDFS-13239
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Bharat Viswanadham
>Priority: Minor
>
> When EC policy is set on a non-empty directory, the following warning message 
> is given:
> {code}
> $hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to RS-6-3-1024k
> {code}
> When we do not specify the -policy parameter when setting EC policy on a 
> directory, it takes the default EC policy. Setting default EC policy in this 
> way on a non-empty directory gives the following warning message:
> {code}
> $hdfs ec -setPolicy -path /ec2
> Warning: setting erasure coding policy on a non-empty directory will not 
> automatically convert existing files to null
> {code}
> Notice that the warning message in the 2nd case has the ecPolicy name shown 
> as null. We should instead give the default EC policy name in this message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13225) StripeReader#checkMissingBlocks() 's IOException info is incomplete

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388904#comment-16388904
 ] 

genericqa commented on HDFS-13225:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13225 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913299/HDFS-13225.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 19acb8e123c1 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 346caa2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23325/testReport/ |
| Max. process+thread count | 292 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23325/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-12999) When reach the end of the block group, it may not need to flush all the data packets(flushAllInternals) twice.

2018-03-06 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388862#comment-16388862
 ] 

lufei commented on HDFS-12999:
--

Can you help review this issue, [~hanishakoneru]? Thank you very much.

> When reach the end of the block group, it may not need to flush all the data 
> packets(flushAllInternals) twice. 
> ---
>
> Key: HDFS-12999
> URL: https://issues.apache.org/jira/browse/HDFS-12999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-12999.001.patch, HDFS-12999.002.patch
>
>
> In order to simplify the process, there is no need to flush all the data 
> packets (flushAllInternals) twice when reaching the end of the block group.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-12999) When reach the end of the block group, it may not need to flush all the data packets(flushAllInternals) twice.

2018-03-06 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12999:
-
Comment: was deleted

(was: [~eddyxu] , can you help me take a look at this issue? Thanks.)

> When reach the end of the block group, it may not need to flush all the data 
> packets(flushAllInternals) twice. 
> ---
>
> Key: HDFS-12999
> URL: https://issues.apache.org/jira/browse/HDFS-12999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.0.0-beta1, 3.1.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HDFS-12999.001.patch, HDFS-12999.002.patch
>
>
> In order to simplify the process, there is no need to flush all the data 
> packets (flushAllInternals) twice when reaching the end of the block group.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13225) StripeReader#checkMissingBlocks() 's IOException info is incomplete

2018-03-06 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388846#comment-16388846
 ] 

lufei commented on HDFS-13225:
--

Thank you for the review, [~hanishakoneru].

Following your suggestion, I replaced '\n' with a semicolon in the 002 patch.

> StripeReader#checkMissingBlocks() 's IOException info is incomplete
> ---
>
> Key: HDFS-13225
> URL: https://issues.apache.org/jira/browse/HDFS-13225
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.1.0, 3.2.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Attachments: HDFS-13225.001.patch, HDFS-13225.002.patch
>
>
> When the file's ErasureCodingPolicy is XOR-3-2-128k, stop 3 datanodes that 
> were used by the block. On the following op (reading the file), the exception 
> message was incomplete, because the LocatedBlocks info was not displayed 
> completely.
>  
> {color:#707070}hadoop@EC102:/home/lufei> hadoop fs -get 
> /lufei/fsimage_00268172191_140 test112{color}
> {color:#707070}get: 3 missing blocks, the stripe is: AlignedStripe(Offset=0, 
> length=131072, fetchedChunksNum=0, missingChunksNum=3); locatedBlocks is: 
> {color:#d04437}LocatedBlocks{{color}{color}
> {color:#707070}hadoop@EC102:/home/lufei>{color}
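
A rough sketch of the message change under review, with assumed variable names 
(not the actual StripeReader code): join the diagnostic parts with a semicolon 
instead of '\n' so the whole LocatedBlocks description stays on one line.
{code:java}
import java.io.IOException;

// Hypothetical illustration of building the exception text with "; ".
public class StripeMessageDemo {
  static IOException missingBlocksException(int missing, String stripe,
      String locatedBlocks) {
    return new IOException(missing + " missing blocks, the stripe is: "
        + stripe + "; locatedBlocks is: " + locatedBlocks);
  }

  public static void main(String[] args) {
    System.out.println(missingBlocksException(3,
        "AlignedStripe(Offset=0, length=131072, fetchedChunksNum=0, missingChunksNum=3)",
        "LocatedBlocks{...}").getMessage());
  }
}
{code}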



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13225) StripeReader#checkMissingBlocks() 's IOException info is incomplete

2018-03-06 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-13225:
-
Attachment: HDFS-13225.002.patch

> StripeReader#checkMissingBlocks() 's IOException info is incomplete
> ---
>
> Key: HDFS-13225
> URL: https://issues.apache.org/jira/browse/HDFS-13225
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.1.0, 3.2.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Attachments: HDFS-13225.001.patch, HDFS-13225.002.patch
>
>
> When the file's ErasureCodingPolicy is XOR-3-2-128k, stop 3 datanodes that 
> were used by the block. On the following op (reading the file), the exception 
> message was incomplete, because the LocatedBlocks info was not displayed 
> completely.
>  
> {color:#707070}hadoop@EC102:/home/lufei> hadoop fs -get 
> /lufei/fsimage_00268172191_140 test112{color}
> {color:#707070}get: 3 missing blocks, the stripe is: AlignedStripe(Offset=0, 
> length=131072, fetchedChunksNum=0, missingChunksNum=3); locatedBlocks is: 
> {color:#d04437}LocatedBlocks{{color}{color}
> {color:#707070}hadoop@EC102:/home/lufei>{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388786#comment-16388786
 ] 

genericqa commented on HDFS-13176:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}137m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}193m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13176 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913266/HDFS-13176.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1aafbed7d1df 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Created] (HDFS-13239) Fix non-empty dir warning message when setting default EC policy

2018-03-06 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-13239:
-

 Summary: Fix non-empty dir warning message when setting default EC 
policy
 Key: HDFS-13239
 URL: https://issues.apache.org/jira/browse/HDFS-13239
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Hanisha Koneru


When EC policy is set on a non-empty directory, the following warning message 
is given:

{code}
$hdfs ec -setPolicy -policy RS-6-3-1024k -path /ec1
Warning: setting erasure coding policy on a non-empty directory will not 
automatically convert existing files to RS-6-3-1024k
{code}

When we do not specify the -policy parameter when setting EC policy on a 
directory, it takes the default EC policy. Setting default EC policy in this 
way on a non-empty directory gives the following warning message:

{code}
$hdfs ec -setPolicy -path /ec2
Warning: setting erasure coding policy on a non-empty directory will not 
automatically convert existing files to null
{code}
Notice that the warning message in the 2nd case has the ecPolicy name shown as 
null. We should instead give the default EC policy name in this message.
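
A minimal sketch of the intended behavior with assumed names (not the actual 
ECAdmin code; the real default policy name comes from the NameNode 
configuration): resolve the default policy name before formatting the warning, 
instead of printing the raw null argument.
{code:java}
public class EcWarningDemo {
  static final String DEFAULT_EC_POLICY = "RS-6-3-1024k";  // assumed default

  static String warningFor(String requestedPolicy) {
    String name = (requestedPolicy != null) ? requestedPolicy : DEFAULT_EC_POLICY;
    return "Warning: setting erasure coding policy on a non-empty directory "
        + "will not automatically convert existing files to " + name;
  }

  public static void main(String[] args) {
    System.out.println(warningFor(null));           // names the default policy
    System.out.println(warningFor("RS-3-2-1024k")); // names the requested one
  }
}
{code}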





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13238) Missing EC Data block throws warn message with full stackTrace

2018-03-06 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDFS-13238:
-

 Summary: Missing EC Data block throws warn message with full 
stackTrace
 Key: HDFS-13238
 URL: https://issues.apache.org/jira/browse/HDFS-13238
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding, hdfs
Reporter: Hanisha Koneru


If an EC data block is missing or corrupted, then the following warning 
message is thrown when the client tries to read the file. 
{code:java}
$ hdfs dfs -get /user/abc/file1
2018-03-06 22:53:32,156 WARN impl.BlockReaderFactory: I/O error constructing 
remote block reader.
java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-1641043599-127.0.0.1-1520368608283:blk_-9223372036854775776_1002 received 
exception java.io.FileNotFoundException: BlockId -9223372036854775776 is not 
valid., for OP_READ_BLOCK, self=/127.0.0.1:60502, remote=/127.0.0.1:9866, 
for file /user/abc/file1, for pool BP-1641043599-127.0.0.1-1520368608283 block 
-9223372036854775776_1002
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:447)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:415)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:860)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:756)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:390)
at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:644)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.createBlockReader(DFSStripedInputStream.java:256)
at org.apache.hadoop.hdfs.StripeReader.readChunk(StripeReader.java:293)
at org.apache.hadoop.hdfs.StripeReader.readStripe(StripeReader.java:323)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readOneStripe(DFSStripedInputStream.java:318)
at 
org.apache.hadoop.hdfs.DFSStripedInputStream.readWithStrategy(DFSStripedInputStream.java:391)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:815)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:94)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
at 
org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:485)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:407)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:342)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:277)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:262)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:257)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
at 
org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:228)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:391)
2018-03-06 22:53:32,167 WARN hdfs.DFSClient: Failed to connect to 
/127.0.0.1:9866 for 
blockBP-1641043599-127.0.0.1-1520368608283:blk_-9223372036854775776_1002
java.io.IOException: Got error, status=ERROR, status message opReadBlock 
BP-1641043599-127.0.0.1-1520368608283:blk_-9223372036854775776_1002 received 
exception java.io.FileNotFoundException: BlockId -9223372036854775776 is not 
valid., for OP_READ_BLOCK, self=/127.0.0.1:60502, remote=/127.0.0.1:9866, for 
file /user/abc/file1, for pool BP-1641043599-127.0.0.1-1520368608283 block 
-9223372036854775776_1002
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:110)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.checkSuccess(BlockReaderRemote.java:447)

[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388688#comment-16388688
 ] 

Xiaoyu Yao commented on HDFS-13109:
---

Agree with [~shahrs87] that we can remove the unused import at commit time to 
save a Jenkins cycle.

In the meantime, I will post a new patch that matches the commit, generated 
with git format-patch, in case cherry-pick is not available. Thanks!

 

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.
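
A minimal sketch of the resolution idea (an assumption, not necessarily the 
committed fix): qualify the supplied path against its FileSystem so 
hdfs://ns1/zone1 and /zone1 compare equal before the Trash directory is 
created. Running it would require a client configuration that knows the ns1 
nameservice.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical illustration: normalize a possibly fully qualified path
// before comparing it with the encryption zone root.
public class QualifyPathDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path supplied = new Path("hdfs://ns1/zone1");  // assumed input
    FileSystem fs = supplied.getFileSystem(conf);
    Path qualified = fs.makeQualified(supplied);   // scheme/authority normalized
    System.out.println(qualified);
  }
}
{code}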



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388664#comment-16388664
 ] 

Daniel Templeton commented on HDFS-13109:
-

I'd really rather we fix that in the patch.  It's very useful to have the 
patches on the JIRAs line up with the commits.

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388659#comment-16388659
 ] 

Rushabh S Shah commented on HDFS-13109:
---

I am sorry to point out that there is one checkstyle warning in the latest 
patch. Let's not waste build resources to fix one checkstyle warning.
[~xyao] can edit that file while committing.

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388644#comment-16388644
 ] 

genericqa commented on HDFS-13109:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 57s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
65 unchanged - 0 fixed = 66 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
33s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterQuota |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13109 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913247/HDFS-13109.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bfc78705f3ec 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality 

[jira] [Comment Edited] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-03-06 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388630#comment-16388630
 ] 

Rushabh S Shah edited comment on HDFS-13148 at 3/6/18 10:26 PM:


Thanks [~hanishakoneru] for working on this.
I haven't gone through the whole patch but have one basic question.
 Can't {{TestEncryptionZonesWithKMSandFederation}} just extend 
{{TestEncryptionZonesWithKMS}} and override {{createCluster}}?
 {{TestEncryptionZones}} will have a createCluster method with one namenode, and 
{{TestEncryptionZonesWithKMSandFederation}} will override it and create a 
{{simpleFederatedTopology}}.
 Does it make sense?
 That way we will have a hierarchy: {{TestEncryptionZonesWithKMSandFederation}} 
--> {{TestEncryptionZonesWithKMS}} --> {{TestEncryptionZones}}


was (Author: shahrs87):
Thanks [~hanishakoneru] for working on this.
Can't {{TestEncryptionZonesWithKMSandFederation}} just extend 
{{TestEncryptionZonesWithKMS}} and override {{createCluster}}.
 {{TestEncryptionZones}} will have createCluster method with one namenode and 
{{TestEncryptionZonesWithKMSandFederation}} will override and create 
{{simpleFederatedTopology}}.
Does it make sense ?
That way we will have hierarchy.  {{TestEncryptionZonesWithKMSandFederation}} 
--> {{TestEncryptionZonesWithKMS}} --> {{TestEncryptionZones}}
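
A skeleton of the hierarchy suggested above, assuming a hypothetical 
overridable createCluster hook (the real test classes may be shaped 
differently):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;

// Hypothetical skeleton only; names follow the comment above, not
// necessarily the actual test code.
public class TestEncryptionZonesWithKMSandFederation
    extends TestEncryptionZonesWithKMS {

  @Override
  protected MiniDFSCluster createCluster(Configuration conf) throws Exception {
    // Federated topology with two namespaces instead of a single namenode.
    return new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleFederatedTopology(2))
        .build();
  }
}
{code}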

> Unit test for EZ with KMS and Federation
> 
>
> Key: HDFS-13148
> URL: https://issues.apache.org/jira/browse/HDFS-13148
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13148.001.patch, HDFS-13148.002.patch, 
> HDFS-13148.003.patch
>
>
> It would be good to have some unit tests for testing KMS and EZ on a 
> federated cluster. We can start with basic EZ operations. For example, create 
> EZs on two namespaces with different keys using one KMS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-03-06 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388630#comment-16388630
 ] 

Rushabh S Shah commented on HDFS-13148:
---

Thanks [~hanishakoneru] for working on this.
Can't {{TestEncryptionZonesWithKMSandFederation}} just extend 
{{TestEncryptionZonesWithKMS}} and override {{createCluster}}?
 {{TestEncryptionZones}} will have a createCluster method with one namenode, and 
{{TestEncryptionZonesWithKMSandFederation}} will override it and create a 
{{simpleFederatedTopology}}.
Does it make sense?
That way we will have a hierarchy: {{TestEncryptionZonesWithKMSandFederation}} 
--> {{TestEncryptionZonesWithKMS}} --> {{TestEncryptionZones}}

> Unit test for EZ with KMS and Federation
> 
>
> Key: HDFS-13148
> URL: https://issues.apache.org/jira/browse/HDFS-13148
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13148.001.patch, HDFS-13148.002.patch, 
> HDFS-13148.003.patch
>
>
> It would be good to have some unit tests for testing KMS and EZ on a 
> federated cluster. We can start with basic EZ operations. For example, create 
> EZs on two namespaces with different keys using one KMS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig

2018-03-06 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388627#comment-16388627
 ] 

Kihwal Lee commented on HDFS-13195:
---

The HttpServer2's conf servlet is supposed to show the http server conf, and it 
is common to use a separate instance of conf for a service to avoid unintended 
runtime conf change propagation. As it is a common pattern and code, I do not 
feel comfortable about changing it only for the datanode. We can discuss a 
general semantics change of the conf servlet here if necessary, e.g. separating 
the conf that the http server uses from what it serves through the servlet. 
That way, each Hadoop service can set the conf instance that it wants to show, 
or it can even decide to show nothing. Optionally, we could also introduce a 
separate servlet for DatanodeHttpServer, just to show the datanode config.

> DataNode conf page  cannot display the current value after reconfig
> ---
>
> Key: HDFS-13195
> URL: https://issues.apache.org/jira/browse/HDFS-13195
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 2.7.1
>
> Attachments: HDFS-13195-branch-2.7.001.patch, HDFS-13195.001.patch
>
>
> Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I 
> reconfigure this key, the conf page's value is still the old config value.
> The reason is as follows:
> {code:java}
> public DatanodeHttpServer(final Configuration conf,
>   final DataNode datanode,
>   final ServerSocketChannel externalHttpChannel)
> throws IOException {
> this.conf = conf;
> Configuration confForInfoServer = new Configuration(conf);
> confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> this.infoServer = builder.build();
> {code}
> The confForInfoServer is a new configuration instance; when dfsadmin 
> reconfigures the datanode's config, the change is not reflected in 
> confForInfoServer, so we should use the datanode's conf.
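
A minimal standalone illustration of the bug pattern described above (plain 
Java maps stand in for Hadoop's Configuration): a copied snapshot does not see 
later reconfiguration, while a shared reference does.
{code:java}
import java.util.HashMap;
import java.util.Map;

public class ConfSnapshotDemo {
  public static void main(String[] args) {
    Map<String, String> liveConf = new HashMap<>();
    liveConf.put("dfs.datanode.data.dir", "/data1");

    // Like confForInfoServer: a snapshot taken at construction time.
    Map<String, String> copyForInfoServer = new HashMap<>(liveConf);

    // Like a dfsadmin reconfig: only the live conf changes.
    liveConf.put("dfs.datanode.data.dir", "/data1,/data2");

    System.out.println(copyForInfoServer.get("dfs.datanode.data.dir")); // /data1 (stale)
    System.out.println(liveConf.get("dfs.datanode.data.dir"));          // /data1,/data2
  }
}
{code}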



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13227) Add a method to calculate cumulative diff over multiple snapshots in DirectoryDiffList

2018-03-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388612#comment-16388612
 ] 

Hudson commented on HDFS-13227:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13783 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13783/])
HDFS-13227. Add a method to calculate cumulative diff over multiple (szetszwo: 
rev 346caa209571dedf1331b2658d5702b45dd40bfe)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java


> Add a method to  calculate cumulative diff over multiple snapshots in 
> DirectoryDiffList
> ---
>
> Key: HDFS-13227
> URL: https://issues.apache.org/jira/browse/HDFS-13227
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13227.001.patch
>
>
> This Jira proposes to add an API in 
> DirectoryWithSnapshotFeature#DirectoryDiffList which will return the minimal 
> list of diffs needed to combine to get the cumulative diff between two given 
> snapshots. The same method will be used while constructing the childrenList 
> for a directory. DirectoryWithSnapshotFeature#getChildrenList and 
> DirectoryWithSnapshotFeature#computeDiffBetweenSnapshots will make use of the 
> same method to get the cumulative diff. With snapshotSkipList providing a 
> minimal set of diffs to combine in order to get the cumulative diff, the 
> overall computation will be faster.
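
A hedged sketch of the API shape described above, with assumed names and types 
(the committed signature may differ):
{code:java}
import java.util.List;

// Hypothetical interface: return the minimal list of diffs that, combined in
// order, yield the cumulative diff between the two given snapshots. Callers
// like getChildrenList and computeDiffBetweenSnapshots would share it.
interface DiffListView<D> {
  List<D> getMinListForRange(int fromSnapshotId, int toSnapshotId);
}
{code}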



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13227) Add a method to calculate cumulative diff over multiple snapshots in DirectoryDiffList

2018-03-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13227:
---
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Shashi!

> Add a method to  calculate cumulative diff over multiple snapshots in 
> DirectoryDiffList
> ---
>
> Key: HDFS-13227
> URL: https://issues.apache.org/jira/browse/HDFS-13227
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13227.001.patch
>
>
> This Jira proposes to add an API in 
> DirectoryWithSnapshotFeature#DirectoryDiffList which will return the minimal 
> list of diffs needed to combine to get the cumulative diff between two given 
> snapshots. The same method will be used while constructing the childrenList 
> for a directory. DirectoryWithSnapshotFeature#getChildrenList and 
> DirectoryWithSnapshotFeature#computeDiffBetweenSnapshots will make use of the 
> same method to get the cumulative diff. With snapshotSkipList providing a 
> minimal set of diffs to combine in order to get the cumulative diff, the 
> overall computation will be faster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388574#comment-16388574
 ] 

genericqa commented on HDFS-13148:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13148 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913243/HDFS-13148.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 92fddcaf7cb6 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9276ef0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23322/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23322/testReport/ |
| Max. process+thread count | 3375 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-06 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388564#comment-16388564
 ] 

Zsolt Venczel commented on HDFS-13176:
--

Thanks [~mackrorysd], it sounds good.

I'll update the patch with the extended test case.

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13227) Add a method to calculate cumulative diff over multiple snapshots in DirectoryDiffList

2018-03-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13227:
---
Priority: Minor  (was: Major)
Hadoop Flags: Reviewed

+1 patch looks good.

> Add a method to  calculate cumulative diff over multiple snapshots in 
> DirectoryDiffList
> ---
>
> Key: HDFS-13227
> URL: https://issues.apache.org/jira/browse/HDFS-13227
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Minor
> Attachments: HDFS-13227.001.patch
>
>
> This Jira proposes to add an API in 
> DirectoryWithSnapshotFeature#DirectoryDiffList which will return the minimal 
> list of diffs needed to combine to get the cumulative diff between two given 
> snapshots. The same method will be used while constructing the childrenList 
> for a directory. DirectoryWithSnapshotFeature#getChildrenList and 
> DirectoryWithSnapshotFeature#computeDiffBetweenSnapshots will make use of the 
> same method to get the cumulative diff. With snapshotSkipList providing a 
> minimal set of diffs to combine in order to get the cumulative diff, the 
> overall computation will be faster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-06 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388560#comment-16388560
 ] 

Sean Mackrory commented on HDFS-13176:
--

In light of other ASCII / Unicode characters being legal, I added everything I 
could from a standard keyboard to the test, and it still passes. Sound good to 
you, [~zvenczel]?
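
For illustration, a minimal standalone example of one encoding idea (an 
assumption, not the actual patch): percent-encoding ';' in a path segment keeps 
servers that treat ';' as a parameter delimiter from truncating the name.
{code:java}
import java.net.URLEncoder;

public class SemicolonEncodeDemo {
  public static void main(String[] args) throws Exception {
    String segment = "semi;colon.txt";                    // assumed example
    String encoded = URLEncoder.encode(segment, "UTF-8"); // "semi%3Bcolon.txt"
    System.out.println(encoded);
  }
}
{code}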

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-06 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HDFS-13176:
-
Attachment: HDFS-13176.02.patch

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, HDFS-13176.02.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10276) HDFS should not expose path info that user has no permission to see.

2018-03-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-10276:

Labels: security  (was: )

> HDFS should not expose path info that user has no permission to see.
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, security
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
>Priority: Major
>  Labels: security
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> The following issue is remedied by HDFS-5802.
> {quote}
> Given you have a file {{/file}} an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}
> {quote}
> However, HDFS-5802 may expose information about a path that the user doesn't 
> have permission to see. 
> For example, if the user asks for /a/b/c, but does not have permission to 
> list /a, we should not complain about /a/b
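
As a hedged illustration of the contract being asked for (using the /a/b/c
example from the description; the surrounding cluster setup is assumed):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ExistsProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    try {
      // Caller lacks EXECUTE on /a; the probe goes two levels deeper.
      System.out.println(fs.exists(new Path("/a/b/c")));
    } catch (org.apache.hadoop.security.AccessControlException ace) {
      // The message should name /a, where access was denied, and must not
      // leak deeper components such as /a/b.
      System.out.println("denied: " + ace.getMessage());
    }
  }
}
{code}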



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10276) HDFS should not expose path info that user has no permission to see.

2018-03-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HDFS-10276:

Component/s: security
 fs

> HDFS should not expose path info that user has no permission to see.
> 
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs, security
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
>Priority: Major
>  Labels: security
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HDFS-10276.001.patch, HDFS-10276.002.patch, 
> HDFS-10276.003.patch, HDFS-10276.004.patch, HDFS-10276.005.patch, 
> HDFS-10276.006.patch
>
>
> The following issue is remedied by HDFS-5802.
> {quote}
> Given you have a file {{/file}} an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}
> {quote}
> However, HDFS-5802 may expose information about a path that the user doesn't 
> have permission to see. 
> For example, if the user asks for /a/b/c, but does not have permission to 
> list /a, we should not complain about /a/b



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388547#comment-16388547
 ] 

Xiaoyu Yao commented on HDFS-13109:
---

+1 pending Jenkins.

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.
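
One way to normalize such a path before the zone-root comparison, as a minimal
sketch (an assumed approach for illustration, not necessarily what the
committed patch does):

{code:java}
import org.apache.hadoop.fs.Path;

public class EzPathNormalize {
  public static void main(String[] args) {
    Path given = new Path("hdfs://ns1/zone1");
    // Drop scheme and authority so "hdfs://ns1/zone1" compares equal to the
    // zone root "/zone1" recorded by the namenode.
    Path local = Path.getPathWithoutSchemeAndAuthority(given);
    System.out.println(local);   // prints /zone1
  }
}
{code}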



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-06 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388339#comment-16388339
 ] 

Sean Mackrory edited comment on HDFS-13176 at 3/6/18 9:13 PM:
--

Thanks to [~anu] for digging up the appropriate documentation of legal paths - 
I had missed it looking at HDFS-specific stuff, but the documentation is 
Common-wide, which makes sense. The gist is that ':' is indeed illegal, but all 
of the other characters you're testing and the others I mentioned (that I 
didn't want to support) are supposed to work. +1 to your fix - I'll commit it 
but will give another day in case the folks you tagged want to chime in. We may 
as well add other characters that are supposed to work to verify they do and 
help keep it that way, too.

 

edit: for context, this is what Anu linked to on the mailing list thread: 
http://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-common/filesystem/introduction.html#Path_Names


was (Author: mackrorysd):
Thanks to [~anu] for digging up the appropriate documentation of legal paths - 
I had missed it looking at HDFS-specific stuff, but the documentation is 
Common-wide, which makes sense. The gist is that ':' is indeed illegal, but all 
of the other characters you're testing and the others I mentioned (that I 
didn't want to support) are supposed to work. +1 to your fix - I'll commit it 
but will give another day in case the folks you tagged want to chime in. We may 
as well add other characters that are supposed to work to verify they do and 
help keep it that way, too.

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388505#comment-16388505
 ] 

Hudson commented on HDFS-13223:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13782 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13782/])
HDFS-13223. Reduce DiffListBySkipList memory usage.  Contributed by (szetszwo: 
rev 871d0d39faa2c4c992d61ed20497dcf6c3faa376)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestDiffListBySkipList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryDiffListFactory.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DiffListBySkipList.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java


> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch, 
> HDFS-13223.003.patch, HDFS-13223.004.patch, HDFS-13223.004_commit.patch
>
>
> There are several ways to reduce memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.
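
To make the array-backed and no-level-0 bullets above concrete, a toy sketch
(all names here are illustrative assumptions, not the patch's code):

{code:java}
class SkipNodeCompact {
  /** skipDiffs[i] holds the aggregated diff for skip level i + 1; level 0 is
   *  not stored here because it already lives in the ordinary diff list. */
  private final Object[] skipDiffs;

  SkipNodeCompact(int levels) {
    // levels <= 1 means there are no skip levels above 0: store nothing.
    skipDiffs = levels <= 1 ? null : new Object[levels - 1];
  }

  Object diffAtLevel(int level) {
    if (level < 1 || skipDiffs == null || level - 1 >= skipDiffs.length) {
      throw new IllegalArgumentException("no stored diff at level " + level);
    }
    return skipDiffs[level - 1];
  }
}
{code}

A plain array avoids the ArrayList wrapper object and its spare capacity, and
skipping the level-0 slot removes one reference per node.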



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls

2018-03-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13222:
---
Hadoop Flags: Reviewed

+1 patch looks good.

Will commit this tomorrow if there are no more comments.

> Update getBlocks method to take minBlockSize in RPC calls
> -
>
> Key: HDFS-13222
> URL: https://issues.apache.org/jira/browse/HDFS-13222
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, 
> HDFS-13222.02.patch
>
>
> Making getBlocks use the balancer parameter was done in HDFS-9412.
> This Jira passes the Balancer's configured value from the Balancer to the NN 
> via getBlocks in each RPC, as [~szetszwo] suggested.
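
The RPC shape under discussion, sketched with an assumed signature (getBlocks
lives on NamenodeProtocol; the exact parameter naming here is illustrative):

{code:java}
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations;

interface BalancerProtocolSketch {
  /** datanode: source to pull blocks from; size: total bytes wanted;
   *  minBlockSize: the balancer's configured minimum, shipped on every call
   *  so the NN no longer needs a server-side copy of the setting. */
  BlocksWithLocations getBlocks(DatanodeInfo datanode, long size,
      long minBlockSize) throws IOException;
}
{code}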



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13223:
---
   Resolution: Fixed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

I have committed this.  Thanks, Shashi!

> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch, 
> HDFS-13223.003.patch, HDFS-13223.004.patch, HDFS-13223.004_commit.patch
>
>
> There are several ways to reduce memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13223:
---
Attachment: HDFS-13223.004_commit.patch

> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch, 
> HDFS-13223.003.patch, HDFS-13223.004.patch, HDFS-13223.004_commit.patch
>
>
> There are several ways to reduce memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-13223:
---
Hadoop Flags: Reviewed

+1 patch looks good.

There is a LineLength checkstyle warning in TestDiffListBySkipList. Since it 
is very minor, I will just fix it before committing the patch.

> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch, 
> HDFS-13223.003.patch, HDFS-13223.004.patch
>
>
> There are several ways to reduce memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13225) StripeReader#checkMissingBlocks() 's IOException info is incomplete

2018-03-06 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388450#comment-16388450
 ] 

Hanisha Koneru commented on HDFS-13225:
---

Thanks for the contribution, [~figo].

The patch LGTM overall. Just one nit: instead of removing '\n', can we replace 
it with a delimiter such as a semicolon?
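
A trivial sketch of what the nit suggests (the message text below is made up):

{code:java}
public class OneLineMessage {
  public static void main(String[] args) {
    String locatedBlocksInfo =
        "LocatedBlocks{\n  fileLength=42\n  underConstruction=false\n}";
    // Keep the exception message on one line by swapping newlines for a
    // delimiter instead of deleting them outright.
    System.out.println(locatedBlocksInfo.replace('\n', ';'));
  }
}
{code}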

> StripeReader#checkMissingBlocks() 's IOException info is incomplete
> ---
>
> Key: HDFS-13225
> URL: https://issues.apache.org/jira/browse/HDFS-13225
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, hdfs-client
>Affects Versions: 3.1.0, 3.2.0
>Reporter: lufei
>Assignee: lufei
>Priority: Major
> Attachments: HDFS-13225.001.patch
>
>
> When the file's ErasureCodingPolicy is XOR-3-2-128k, stop 3 datanodes which 
> were used by the block. With the following op (read the file), the exception 
> message was incomplete, because the LocatedBlocks info was not displayed in 
> full (it is cut off right after "LocatedBlocks{"):
>  
> hadoop@EC102:/home/lufei> hadoop fs -get /lufei/fsimage_00268172191_140 test112
> get: 3 missing blocks, the stripe is: AlignedStripe(Offset=0, length=131072, 
> fetchedChunksNum=0, missingChunksNum=3); locatedBlocks is: LocatedBlocks{
> hadoop@EC102:/home/lufei>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388377#comment-16388377
 ] 

Rushabh S Shah commented on HDFS-13109:
---

Thanks [~hanishakoneru] 
+1 non-binding pending jenkins for v5 patch.

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388372#comment-16388372
 ] 

Hanisha Koneru commented on HDFS-13109:
---

Thank you [~xyao] and [~shahrs87] for the reviews and discussion.

I have updated patch v05 addressing Rushabh's comments on TestEncryptionZones. 

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13109) Support fully qualified hdfs path in EZ commands

2018-03-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13109:
--
Attachment: HDFS-13109.005.patch

> Support fully qualified hdfs path in EZ commands
> 
>
> Key: HDFS-13109
> URL: https://issues.apache.org/jira/browse/HDFS-13109
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13109.001.patch, HDFS-13109.002.patch, 
> HDFS-13109.003.patch, HDFS-13109.004.patch, HDFS-13109.005.patch
>
>
> When creating an Encryption Zone, if the fully qualified path is specified in 
> the path argument, it throws the following error.
> {code:java}
> ~$ hdfs crypto -createZone -keyName mykey1 -path hdfs://ns1/zone1
> IllegalArgumentException: hdfs://ns1/zone1 is not the root of an encryption 
> zone. Do you mean /zone1?
> ~$ hdfs crypto -createZone -keyName mykey1 -path "hdfs://namenode:9000/zone2" 
> IllegalArgumentException: hdfs://namenode:9000/zone2 is not the root of an 
> encryption zone. Do you mean /zone2?
> {code}
> The EZ creation succeeds as the path is resolved in 
> DFS#createEncryptionZone(). But while creating the Trash directory, the path 
> is not resolved and it throws the above error.
>  A fully qualified path should be supported by {{crypto}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside

2018-03-06 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388339#comment-16388339
 ] 

Sean Mackrory commented on HDFS-13176:
--

Thanks to [~anu] for digging up the appropriate documentation of legal paths - 
I had missed it looking at HDFS-specific stuff, but the documentation is 
Common-wide, which makes sense. The gist is that ':' is indeed illegal, but all 
of the other characters you're testing and the others I mentioned (that I 
didn't want to support) are supposed to work. +1 to your fix - I'll commit it 
but will give another day in case the folks you tagged want to chime in. We may 
as well add other characters that are supposed to work to verify they do and 
help keep it that way, too.

> WebHdfs file path gets truncated when having semicolon (;) inside
> -
>
> Key: HDFS-13176
> URL: https://issues.apache.org/jira/browse/HDFS-13176
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.0.0
>Reporter: Zsolt Venczel
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HDFS-13176.01.patch, 
> TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch
>
>
> Find attached a patch having a test case that tries to reproduce the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388303#comment-16388303
 ] 

genericqa commented on HDFS-11807:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
45s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
9s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
9s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
56m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
23s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:7831ad9 |
| JIRA Issue | HDFS-11807 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913223/HDFS-11807.HDFS-8707.009.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  shadedclient  |
| uname | Linux deecad928422 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-8707 / 7831ad9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23321/testReport/ |
| Max. process+thread count | 410 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23321/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>

[jira] [Updated] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-03-06 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-13148:
--
Attachment: HDFS-13148.003.patch

> Unit test for EZ with KMS and Federation
> 
>
> Key: HDFS-13148
> URL: https://issues.apache.org/jira/browse/HDFS-13148
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13148.001.patch, HDFS-13148.002.patch, 
> HDFS-13148.003.patch
>
>
> It would be good to have some unit tests for testing KMS and EZ on a 
> federated cluster. We can start with basic EZ operations. For example, create 
> EZs on two namespaces with different keys using one KMS.
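
A skeleton of the basic case described above; cluster and KMS setup are
elided, and admin0/admin1 are assumed HdfsAdmin handles for the two
namespaces, both pointing at the same KMS via hadoop.security.key.provider.path:

{code:java}
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.CreateEncryptionZoneFlag;
import org.apache.hadoop.hdfs.client.HdfsAdmin;

public class EzFederationSketch {
  /** One zone per namespace, each with its own key, both keys served by the
   *  single shared KMS. Zone directories must already exist. */
  static void createZones(HdfsAdmin admin0, HdfsAdmin admin1)
      throws IOException {
    EnumSet<CreateEncryptionZoneFlag> noTrash =
        EnumSet.of(CreateEncryptionZoneFlag.NO_TRASH);
    admin0.createEncryptionZone(new Path("/zone0"), "key0", noTrash);
    admin1.createEncryptionZone(new Path("/zone1"), "key1", noTrash);
  }
}
{code}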



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13148) Unit test for EZ with KMS and Federation

2018-03-06 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388280#comment-16388280
 ] 

Hanisha Koneru commented on HDFS-13148:
---

Fixed checkstyle issues in patch v03.

> Unit test for EZ with KMS and Federation
> 
>
> Key: HDFS-13148
> URL: https://issues.apache.org/jira/browse/HDFS-13148
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDFS-13148.001.patch, HDFS-13148.002.patch, 
> HDFS-13148.003.patch
>
>
> It would be good to have some unit tests for testing KMS and EZ on a 
> federated cluster. We can start with basic EZ operations. For example, create 
> EZs on two namespaces with different keys using one KMS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388242#comment-16388242
 ] 

Hudson commented on HDFS-13170:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13781 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13781/])
HDFS-13170. Port webhdfs unmaskedpermission parameter to HTTPFS. (xiao: rev 
9276ef066586a704f6898b670515029b5e3a20eb)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java


> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch, 
> HDFS-13170.003.patch, HDFS-13170.004.patch
>
>
> HDFS-6962 fixed a long standing issue where default ACLs are not correctly 
> applied to files when they are created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter was added to webhdfs 
> 'unmaskedpermission'. By passing it to a webhdfs call, it can result in the 
> same behaviour as when a file is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream" 
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE&user.name=user1&namenoderpcaddress=namenode:8020&overwrite=false&unmaskedpermission=770"
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13188) Disk Balancer: Support multiple block pools during block move

2018-03-06 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388203#comment-16388203
 ] 

Bharat Viswanadham commented on HDFS-13188:
---

Thank you [~elgoiri] for reviewing and committing the patch.

> Disk Balancer: Support multiple block pools during block move
> -
>
> Key: HDFS-13188
> URL: https://issues.apache.org/jira/browse/HDFS-13188
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13188.01.patch, HDFS-13188.02.patch, 
> HDFS-13188.03.patch, HDFS-13188.04.patch, HDFS-13188.05.patch
>
>
> During plan execution:
> *Federated setup:*
> When multiple block pools are present, balancing only copies blocks from the 
> first block pool to the destination disk.
> We want to distribute the blocks from all block pools on the source disk to 
> the destination disk during balancing.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-06 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13170:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.2
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.1 and branch-3.0.
Thanks for the contribution [~sodonnell]!

> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.1.0, 3.0.2
>
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch, 
> HDFS-13170.003.patch, HDFS-13170.004.patch
>
>
> HDFS-6962 fixed a long standing issue where default ACLs are not correctly 
> applied to files when they are created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter was added to webhdfs 
> 'unmaskedpermission'. By passing it to a webhdfs call, it can result in the 
> same behaviour as when a file is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream" 
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE&user.name=user1&namenoderpcaddress=namenode:8020&overwrite=false&unmaskedpermission=770"
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388197#comment-16388197
 ] 

Xiao Chen commented on HDFS-13170:
--

Thanks Stephen for revving! The new javadoc issue is because of misnaming; I 
can fix that at commit time.

> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch, 
> HDFS-13170.003.patch, HDFS-13170.004.patch
>
>
> HDFS-6962 fixed a long standing issue where default ACLs are not correctly 
> applied to files when they are created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter was added to webhdfs 
> 'unmaskedpermission'. By passing it to a webhdfs call, it can result in the 
> same behaviour as when a file is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream" 
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE&user.name=user1&namenoderpcaddress=namenode:8020&overwrite=false&unmaskedpermission=770"
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388165#comment-16388165
 ] 

genericqa commented on HDFS-13223:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 10 unchanged - 0 fixed = 11 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 49s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13223 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913211/HDFS-13223.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5bb44fc8ab07 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e6f99e2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23320/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-06 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388160#comment-16388160
 ] 

Wei Yan commented on HDFS-13224:


For the available space policy, it could follow the same idea as HDFS-8131. In 
short, use a higher probability (instead of "always") of choosing the cluster 
with more available space.
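
Sketched below is that probabilistic pick; the 0.75 preference constant and
all names are assumptions for illustration (HDFS-8131 exposes its own tunable
fraction):

{code:java}
import java.util.Random;

public class AvailableSpacePick {
  // Assumed tuning knob: chance of picking the subcluster with more space.
  private static final float PREFERENCE = 0.75f;
  private final Random rand = new Random();

  String pick(String a, long freeA, String b, long freeB) {
    if (freeA == freeB) {
      return rand.nextBoolean() ? a : b;        // tie: plain coin flip
    }
    String emptier = freeA > freeB ? a : b;
    String fuller  = freeA > freeB ? b : a;
    // Biased coin: usually the emptier one, but the fuller subcluster still
    // receives some files rather than "always" losing.
    return rand.nextFloat() < PREFERENCE ? emptier : fuller;
  }
}
{code}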

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-03-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13237:
---
Description: Document the feature to spread mount points across multiple 
subclusters.

> [Documentation] RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13237
> URL: https://issues.apache.org/jira/browse/HDFS-13237
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
>
> Document the feature to spread mount points across multiple subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13237) [Documentation] RBF: Mount points across multiple subclusters

2018-03-06 Thread JIRA
Íñigo Goiri created HDFS-13237:
--

 Summary: [Documentation] RBF: Mount points across multiple 
subclusters
 Key: HDFS-13237
 URL: https://issues.apache.org/jira/browse/HDFS-13237
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13188) Disk Balancer: Support multiple block pools during block move

2018-03-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388147#comment-16388147
 ] 

Hudson commented on HDFS-13188:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13779 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13779/])
HDFS-13188. Disk Balancer: Support multiple block pools during block (inigoiri: 
rev 7060725662cb3317ff2f0fcc38f965fd23e8e6aa)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/DiskBalancerTestUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/diskbalancer/TestDiskBalancerRPC.java


> Disk Balancer: Support multiple block pools during block move
> -
>
> Key: HDFS-13188
> URL: https://issues.apache.org/jira/browse/HDFS-13188
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13188.01.patch, HDFS-13188.02.patch, 
> HDFS-13188.03.patch, HDFS-13188.04.patch, HDFS-13188.05.patch
>
>
> During plan execution:
> *Federated setup:*
> When multiple block pools are present, balancing only copies blocks from the 
> first block pool to the destination disk.
> We want to distribute the blocks from all block pools on the source disk to 
> the destination disk during balancing.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388145#comment-16388145
 ] 

Íñigo Goiri commented on HDFS-13224:


Thanks [~linyiqun] for the comments.
# I actually just went through the ones that were more relevant for our 
workloads, I didn't do an exhaustive pass. I'll do this in the next iteration.
# That sounds good. My only concern is the size of the patch (I'm even 
considering removing some stuff from the current one). What about doing it in a 
follow-up JIRA?

I'm thinking on adding an extra JIRA to document this carefully.

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388132#comment-16388132
 ] 

Íñigo Goiri commented on HDFS-13212:


In [^HDFS-13212-004.patch], does it make sense to remove the default location? 
Can you go deeper into this case?

FindBugs is not very happy either because of printing a well-known null.

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache. The old location cache 
> will never be invalidated until that mount point changes again.
> We need to invalidate the location cache when adding mount table entries.
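
A minimal sketch of the invalidation idea (class and field names are assumed,
not the patch's code):

{code:java}
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LocationCacheSketch {
  private final Map<String, String> locationCache = new ConcurrentHashMap<>();

  /** Drop every cached resolution at or under the added/updated mount point,
   *  so lookups made before the entry existed cannot be served stale. */
  void invalidate(String mountSrc) {
    Iterator<String> it = locationCache.keySet().iterator();
    while (it.hasNext()) {
      String path = it.next();
      if (path.equals(mountSrc) || path.startsWith(mountSrc + "/")) {
        it.remove();
      }
    }
  }
}
{code}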



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388126#comment-16388126
 ] 

Íñigo Goiri commented on HDFS-13233:


Thanks [~striver.wang] for [^HDFS-13233.002.patch].
It looks good, I would add a javadoc to {{isParentEntry()}} giving a high level 
idea of the behavior and probably a couple examples there.

There are a couple failed unit tests that seem related:
* TestRouterQuota
* TestMountTableResolver

Do you mind taking a look?

> RBF:getMountPoint doesn't return the correct mount point of the mount table
> ---
>
> Key: HDFS-13233
> URL: https://issues.apache.org/jira/browse/HDFS-13233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: wangzhiyuan
>Priority: Major
> Attachments: HDFS-13233.001.patch, HDFS-13233.002.patch
>
>
> Method MountTableResolver#getMountPoint will traverse the mount table and 
> return the first mount point that the input path starts with, but that 
> condition is not sufficient.
> Suppose the mount table contains "/user", "/user/test", and "/user/test1". 
> If the input path is "/user/test111", the returned mount point is 
> "/user/test1", but the correct one should be "/user".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13214) RBF: Configuration on Router conflicts with client side configuration

2018-03-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388119#comment-16388119
 ] 

Íñigo Goiri commented on HDFS-13214:


+1 on [^HDFS-13214.004.patch].

[~Tao Jie], is this enough?

> RBF: Configuration on Router conflicts with client side configuration
> -
>
> Key: HDFS-13214
> URL: https://issues.apache.org/jira/browse/HDFS-13214
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 2.9.0
>Reporter: Tao Jie
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDFS-13214.001.patch, HDFS-13214.002.patch, 
> HDFS-13214.003.patch, HDFS-13214.004.patch
>
>
> In a typical router-based federation cluster, hdfs-site.xml is supposed to be:
> {code}
> <property>
>   <name>dfs.nameservices</name>
>   <value>ns1,ns2,ns-fed</value>
> </property>
> <property>
>   <name>dfs.ha.namenodes.ns-fed</name>
>   <value>r1,r2</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns1</name>
>   <value>host1:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns2</name>
>   <value>host2:8020</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r1</name>
>   <value>host1:</value>
> </property>
> <property>
>   <name>dfs.namenode.rpc-address.ns-fed.r2</name>
>   <value>host2:</value>
> </property>
> {code}
> {{dfs.ha.namenodes.ns-fed}} here is used for clients to access the Router. 
> However, with this configuration on the server node, the Router fails to start 
> with the error:
> {code}
> org.apache.hadoop.HadoopIllegalArgumentException: Configuration has multiple 
> addresses that match local node's address. Please configure the system with 
> dfs.nameservice.id and dfs.ha.namenode.id
> at org.apache.hadoop.hdfs.DFSUtil.getSuffixIDs(DFSUtil.java:1198)
> at org.apache.hadoop.hdfs.DFSUtil.getNameServiceId(DFSUtil.java:1131)
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNamenodeNameServiceId(DFSUtil.java:1086)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createLocalNamenodeHearbeatService(Router.java:466)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.createNamenodeHearbeatServices(Router.java:423)
> at 
> org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:199)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter.main(DFSRouter.java:69)
> 2018-03-01 18:05:56,208 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.DFSRouter: Failed to start 
> router
> {code}
> Then, when the router tries to find the local namenode, multiple properties 
> ({{dfs.namenode.rpc-address.ns1}} and {{dfs.namenode.rpc-address.ns-fed.r1}}) 
> match the local address.
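Per the error message, one way to disambiguate on the Router node is to pin the 
local identity explicitly. A sketch only, assuming ns1 is the local non-HA 
nameservice on host1 ({{dfs.ha.namenode.id}}, also named in the error text, 
would additionally be needed for an HA nameservice):

{code:xml}
<!-- Hypothetical disambiguation on host1: tell the node which of the
     matching nameservices is the local one. -->
<property>
  <name>dfs.nameservice.id</name>
  <value>ns1</value>
</property>
{code}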



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2

2018-03-06 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388116#comment-16388116
 ] 

Íñigo Goiri commented on HDFS-12615:


[~maobaolong], I had tried something similar to the one in YARN-3663 but I never 
got too far.

Feel free to create a task and start with it; just keep YARN-3663 as a 
reference.

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> This umbrella JIRA tracks set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13188) Disk Balancer: Support multiple block pools during block move

2018-03-06 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388113#comment-16388113
 ] 

Íñigo Goiri commented on HDFS-13188:


Thanks [~bharatviswa] for the ping, I had forgotten.

Committed to trunk.

> Disk Balancer: Support multiple block pools during block move
> -
>
> Key: HDFS-13188
> URL: https://issues.apache.org/jira/browse/HDFS-13188
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13188.01.patch, HDFS-13188.02.patch, 
> HDFS-13188.03.patch, HDFS-13188.04.patch, HDFS-13188.05.patch
>
>
> During plan execution:
> *Federated setup:*
> When multiple block pools are present, it will only copy blocks from the first 
> block pool to the destination disk when balancing.
> We want to distribute the blocks from all block pools on the source disk to 
> the destination disk during balancing.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13188) Disk Balancer: Support multiple block pools during block move

2018-03-06 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13188:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

> Disk Balancer: Support multiple block pools during block move
> -
>
> Key: HDFS-13188
> URL: https://issues.apache.org/jira/browse/HDFS-13188
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13188.01.patch, HDFS-13188.02.patch, 
> HDFS-13188.03.patch, HDFS-13188.04.patch, HDFS-13188.05.patch
>
>
> During plan execution:
> *Federated setup:*
> When multiple block pools are present, it will only copy blocks from the first 
> block pool to the destination disk when balancing.
> We want to distribute the blocks from all block pools on the source disk to 
> the destination disk during balancing.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13232) RBF: ConnectionPool should return first usable connection

2018-03-06 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388059#comment-16388059
 ] 

Wei Yan commented on HDFS-13232:


Agree. Will coordinate with HDFS-13230 to add test cases for ConnectionManager.

> RBF: ConnectionPool should return first usable connection
> -
>
> Key: HDFS-13232
> URL: https://issues.apache.org/jira/browse/HDFS-13232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
>
> In the current ConnectionPool.getConnection(), it will return the first active 
> connection:
> {code:java}
> for (int i = 0; i < size; i++) {
>   int index = (threadIndex + i) % size;
>   conn = tmpConnections.get(index);
>   if (conn != null && !conn.isUsable()) {
>     return conn;
>   }
> }
> {code}
> Here "!conn.isUsable()" should be "conn.isUsable()".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2018-03-06 Thread Anatoli Shein (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388044#comment-16388044
 ] 

Anatoli Shein commented on HDFS-11807:
--

Thanks for another review, [~James C]!

In the new patch (009) I have addressed your comments as follows:

-check the return code on the socketpair() call
 (/) Done.

 

-You can't assume a file descriptor is going to be in the range [0,255]; you'll 
often get this for a process that has just spawned, but that's just luck. 
If the upper bit is set it's also going to be explicitly sign-extended when 
it's promoted to a wider type, which can make things a little confusing to 
debug. I'd also convert the binary value to a string or hex encoding, because if 
the least significant byte of an int is 0 it's going to be treated as a null 
terminator.
{code:java}
// The argument contains child socket
fd[childsocket] = (int)argv[1][0];
{code}
(/) Fixed: I am passing it as a string now.

 

-Since we're sharing this test with libhdfs which can build on windows we can't 
unconditionally include headers that windows won't have. Since this test also 
leans heavily on *nix style process management e.g. fork() it might be best to 
just avoid building this test on windows.
{code:java}
#include 
{code}
(i) Agreed, eventually we might look into making this test windows compatible.

 

Nitpicky stuff, not blockers but would clean things up a little:

-Don't need an extra cast when calling free()
{code:java}
free((char*)httpHost);
{code}
(/) Done.

 

Same idea when writing httpHost over the socket
{code:java}
ASSERT_INT64_EQ(read(fd[parentsocket], (char*)httpHost, hostSize), hostSize);
{code}
(/) Done.

 

-I'd change "fd[]" so "fds" or something plural to make it's clear that it's an 
array since a lot of C examples will name a single descriptor "fd". You can 
also just make a single int variable at the top of main() that gets assigned 
the appropriate side of the pair once you've forked just to avoid indexing 
(you'll still need the array to pass to socketpair).
 (/) Fixed: changed name from fd to fds.

> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11807.HDFS-8707.000.patch, 
> HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch, 
> HDFS-11807.HDFS-8707.003.patch, HDFS-11807.HDFS-8707.004.patch, 
> HDFS-11807.HDFS-8707.005.patch, HDFS-11807.HDFS-8707.006.patch, 
> HDFS-11807.HDFS-8707.007.patch, HDFS-11807.HDFS-8707.008.patch, 
> HDFS-11807.HDFS-8707.009.patch
>
>
> The gmock based unit tests generally don't expose race conditions and memory 
> stomps.  A good way to expose these is running libhdfs++ stress tests and 
> tools under valgrind and pointing them at a real cluster.  Right now the CI 
> tools don't do that so bugs occasionally slip in and aren't caught until they 
> cause trouble in applications that use libhdfs++ for HDFS access.
> The reason the minidfscluster tests don't run under valgrind is because the 
> GC and JIT compiler in the embedded JVM do things that look like errors to 
> valgrind.  I'd like to have these tests do some basic setup and then fork 
> into two processes: one for the minidfscluster stuff and one for the 
> libhdfs++ client test.  A small amount of shared memory can be used to 
> provide a place for the minidfscluster to stick the hdfsBuilder object that 
> the client needs to get info about which port to connect to.  Can also stick 
> a condition variable there to let the minidfscluster know when it can shut 
> down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13224) RBF: Mount points across multiple subclusters

2018-03-06 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16388042#comment-16388042
 ] 

Yiqun Lin commented on HDFS-13224:
--

[~elgoiri], the initial design and patch look good overall. Just two comments:

1. In some RPC calls like {{setPermission}}, we check {{isPathAll(src)}} and 
then do the {{invokeConcurrent}}. But in some other places we don't do this; is 
there any difference between those places? I mean, why don't we do the 
{{isPathAll(src)}} check in all Router server RPC calls? This looks a little 
confusing (see the sketch after these comments).

2. A new order type based on the available space in the destination cluster may 
also be needed. That is to say, users would write files into the destination 
cluster which has the most available space.
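For context, the dispatch pattern under discussion looks roughly like this. A 
hedged sketch only, simplified from what a Router RPC method does; names such as 
{{getLocationsForPath}}, {{invokeConcurrent}} and {{invokeSequential}} are used 
as they appear elsewhere in the RBF code:

{code:java}
// Sketch: fan a write out to every subcluster only when the mount
// point spans all of them; otherwise resolve a single destination.
RemoteMethod method = new RemoteMethod("setPermission",
    new Class<?>[] {String.class, FsPermission.class},
    new RemoteParam(), permissions);
List<RemoteLocation> locations = getLocationsForPath(src, true);
if (isPathAll(src)) {
  rpcClient.invokeConcurrent(locations, method);
} else {
  rpcClient.invokeSequential(locations, method);
}
{code}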

> RBF: Mount points across multiple subclusters
> -
>
> Key: HDFS-13224
> URL: https://issues.apache.org/jira/browse/HDFS-13224
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-13224.000.patch, HDFS-13224.001.patch, 
> HDFS-13224.002.patch
>
>
> Currently, a mount point points to a single subcluster. We should be able to 
> spread files in a mount point across subclusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11807) libhdfs++: Get minidfscluster tests running under valgrind

2018-03-06 Thread Anatoli Shein (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anatoli Shein updated HDFS-11807:
-
Attachment: HDFS-11807.HDFS-8707.009.patch

> libhdfs++: Get minidfscluster tests running under valgrind
> --
>
> Key: HDFS-11807
> URL: https://issues.apache.org/jira/browse/HDFS-11807
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Anatoli Shein
>Priority: Major
> Attachments: HDFS-11807.HDFS-8707.000.patch, 
> HDFS-11807.HDFS-8707.001.patch, HDFS-11807.HDFS-8707.002.patch, 
> HDFS-11807.HDFS-8707.003.patch, HDFS-11807.HDFS-8707.004.patch, 
> HDFS-11807.HDFS-8707.005.patch, HDFS-11807.HDFS-8707.006.patch, 
> HDFS-11807.HDFS-8707.007.patch, HDFS-11807.HDFS-8707.008.patch, 
> HDFS-11807.HDFS-8707.009.patch
>
>
> The gmock based unit tests generally don't expose race conditions and memory 
> stomps.  A good way to expose these is running libhdfs++ stress tests and 
> tools under valgrind and pointing them at a real cluster.  Right now the CI 
> tools don't do that so bugs occasionally slip in and aren't caught until they 
> cause trouble in applications that use libhdfs++ for HDFS access.
> The reason the minidfscluster tests don't run under valgrind is because the 
> GC and JIT compiler in the embedded JVM do things that look like errors to 
> valgrind.  I'd like to have these tests do some basic setup and then fork 
> into two processes: one for the minidfscluster stuff and one for the 
> libhdfs++ client test.  A small amount of shared memory can be used to 
> provide a place for the minidfscluster to stick the hdfsBuilder object that 
> the client needs to get info about which port to connect to.  Can also stick 
> a condition variable there to let the minidfscluster know when it can shut 
> down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387890#comment-16387890
 ] 

Shashikant Banerjee commented on HDFS-13223:


Patch v4 fixes the checkstyle issues.

> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch, 
> HDFS-13223.003.patch, HDFS-13223.004.patch
>
>
> There are several ways to reduce the memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13223:
---
Attachment: HDFS-13223.004.patch

> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch, 
> HDFS-13223.003.patch, HDFS-13223.004.patch
>
>
> There are several ways to reduce the memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387784#comment-16387784
 ] 

genericqa commented on HDFS-13223:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 7 new + 10 unchanged - 0 fixed = 17 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}129m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}178m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestBlocksScheduledCounter |
|   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13223 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913178/HDFS-13223.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cf8ef4b96c70 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12ecb55 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23318/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387701#comment-16387701
 ] 

genericqa commented on HDFS-13170:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 3 new + 397 unchanged - 7 fixed = 400 total (was 404) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-httpfs generated 2 new 
+ 5 unchanged - 0 fixed = 7 total (was 5) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
15s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13170 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913189/HDFS-13170.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af09d0e0fd22 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 12ecb55 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23319/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23319/artifact/out/diff-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23319/testReport/ |
| Max. process+thread count | 684 (vs. ulimit of 

[jira] [Updated] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-06 Thread Stephen O'Donnell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-13170:
-
Attachment: HDFS-13170.004.patch

> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch, 
> HDFS-13170.003.patch, HDFS-13170.004.patch
>
>
> HDFS-6962 fixed a long-standing issue where default ACLs are not correctly 
> applied to files when they are created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x, and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter was added to webhdfs 
> 'unmaskedpermission'. By passing it to a webhdfs call, it can result in the 
> same behaviour as when a file is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream"  
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE=user1=namenode:8020=false=770;
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13170) Port webhdfs unmaskedpermission parameter to HTTPFS

2018-03-06 Thread Stephen O'Donnell (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387662#comment-16387662
 ] 

Stephen O'Donnell commented on HDFS-13170:
--

I should have remembered there were two places where that syntax was used. V4 
is uploaded now.

> Port webhdfs unmaskedpermission parameter to HTTPFS
> ---
>
> Key: HDFS-13170
> URL: https://issues.apache.org/jira/browse/HDFS-13170
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-13170.001.patch, HDFS-13170.002.patch, 
> HDFS-13170.003.patch, HDFS-13170.004.patch
>
>
> HDFS-6962 fixed a long-standing issue where default ACLs are not correctly 
> applied to files when they are created from the hadoop shell.
> With this change, if you create a file with default ACLs against the parent 
> directory, with dfs.namenode.posix.acl.inheritance.enabled=false, the result 
> is:
> {code}
> # file: /test_acl/file_from_shell_off
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:r--
> user:user2:rwx    #effective:r--
> group::r-x    #effective:r--
> group:users:rwx    #effective:r--
> mask::r--
> other::r--
> {code}
> And if you enable this, to fix the bug above, the result is as you would 
> expect:
> {code}
> # file: /test_acl/file_from_shell
> # owner: user1
> # group: supergroup
> user::rw-
> user:user1:rwx    #effective:rw-
> user:user2:rwx    #effective:rw-
> group::r-x    #effective:r--
> group:users:rwx    #effective:rw-
> mask::rw-
> other::r--
> {code}
> If I then create a file over HTTPFS or webHDFS, the behaviour is not the same 
> as above:
> {code}
> # file: /test_acl/default_permissions
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx    #effective:r-x
> user:user2:rwx    #effective:r-x
> group::r-x
> group:users:rwx    #effective:r-x
> mask::r-x
> other::r-x
> {code}
> Notice the mask is set to r-x, and this removes the write permission on the 
> new file.
> As part of HDFS-6962 a new parameter was added to webhdfs 
> 'unmaskedpermission'. By passing it to a webhdfs call, it can result in the 
> same behaviour as when a file is written from the CLI:
> {code}
> curl -i -X PUT -T test.txt --header "Content-Type:application/octet-stream"  
> "http://namenode:50075/webhdfs/v1/test_acl/unmasked__770?op=CREATE=user1=namenode:8020=false=770;
> # file: /test_acl/unmasked__770
> # owner: user1
> # group: supergroup
> user::rwx
> user:user1:rwx
> user:user2:rwx
> group::r-x
> group:users:rwx
> mask::rwx
> other::---
> {code}
> However, this parameter was never ported to HTTPFS.
> This Jira is to replicate the same changes to HTTPFS so this parameter is 
> available there too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387653#comment-16387653
 ] 

genericqa commented on HDFS-13212:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 46s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
13s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m 19s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs |
|  |  Load of known null value in 
org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.invalidateLocationCache(String)
  At MountTableResolver.java:in 
org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.invalidateLocationCache(String)
  At MountTableResolver.java:[line 249] |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13212 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913169/HDFS-13212-004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 64ca44f7d423 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 55ba49d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Commented] (HDFS-12090) Handling writes from HDFS to Provided storages

2018-03-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387644#comment-16387644
 ] 

Rakesh R commented on HDFS-12090:
-

{quote}
Keep in mind that when we perform a multipart-multinode upload, the multipart 
init and complete also need to be ordered. But I think we can do them from the 
tracker.
{quote}
Since the {{internal SPS}} tracks file-block movement at the namenode, the 
multipart init and complete logic needs to live on the namenode side. Can we use 
[BlockMoveTaskHandler|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java#L472]
 and implement the *init* logic in the {{IntraSPSNameNodeBlockMoveTaskHandler}} 
class, which is meant for internal SPS only? Maybe we could change the method 
signature to pass an array of {{blockMovingInfos}}. IIUC, the *complete* call is 
invoked once all the blocks for a file are satisfied. If yes, we could provide 
a new hook for the file, {{BLOCKS_ALREADY_SATISFIED}}, 
[here|https://github.com/apache/hadoop/blob/HDFS-10285/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfier.java#L335].

> Handling writes from HDFS to Provided storages
> --
>
> Key: HDFS-12090
> URL: https://issues.apache.org/jira/browse/HDFS-12090
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Virajith Jalaparti
>Priority: Major
> Attachments: HDFS-12090-Functional-Specification.001.pdf, 
> HDFS-12090-Functional-Specification.002.pdf, 
> HDFS-12090-Functional-Specification.003.pdf, HDFS-12090-design.001.pdf, 
> HDFS-12090..patch, HDFS-12090.0001.patch
>
>
> HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in 
> external storage systems accessible through HDFS. However, HDFS-9806 is 
> limited to data being read through HDFS. This JIRA will deal with how data 
> can be written to such {{PROVIDED}} storages from HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-13223:
---
Attachment: HDFS-13223.003.patch

> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch, 
> HDFS-13223.003.patch
>
>
> There are several ways to reduce the memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13223) Reduce DiffListBySkipList memory usage

2018-03-06 Thread Shashikant Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387584#comment-16387584
 ] 

Shashikant Banerjee commented on HDFS-13223:


Thanks [~szetszwo], for the review. Patch v3 addresses your review comments.

> Reduce DiffListBySkipList memory usage
> --
>
> Key: HDFS-13223
> URL: https://issues.apache.org/jira/browse/HDFS-13223
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDFS-13223.001.patch, HDFS-13223.002.patch, 
> HDFS-13223.003.patch
>
>
> There are several ways to reduce the memory footprint of DiffListBySkipList.
> - Move maxSkipLevels and skipInterval to DirectoryDiffListFactory.
> - Use an array for skipDiffList instead of List.
> - Do not store the level 0 element in skipDiffList.
> - Do not create new ChildrenDiff for the same value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13233) RBF:getMountPoint doesn't return the correct mount point of the mount table

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387510#comment-16387510
 ] 

genericqa commented on HDFS-13233:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 35s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}146m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}193m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.federation.router.TestRouterQuota |
|   | hadoop.hdfs.server.federation.resolver.TestMountTableResolver |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-13233 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913158/HDFS-13233.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2f431a0c5706 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745190e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23316/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/23316/testReport/ |
| Max. process+thread count | 3505 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2018-03-06 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387507#comment-16387507
 ] 

Rakesh R commented on HDFS-13209:
-

{quote}However, sometime, we might need to keep all files in the same directory 
(consistency constraint) but might want some of them on SSD (small, in my case) 
until they are processed and merger/removed. Then they will go on the default 
policy.
{quote}
A user can set a StoragePolicy on either a directory or a file; see 
[fs#setStoragePolicy|https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/fs/FileSystem.html#setStoragePolicy(org.apache.hadoop.fs.Path,%20java.lang.String)].
 I agree with you: presently there is no option to pass a storage policy during 
file creation; a newly created file inherits the storage policy from its parent 
directory and continues writing blocks using that policy. I'm not against this 
new API proposal, but this behavior can already be achieved at the additional 
cost of one FileSystem API call.

How about changing the storage policy on a file before writing contents to it? 
I'll try to describe the steps; please go through them and let me know if I 
missed anything.
{code:java}
Step-1) Assume the parent directory "/myparent" is configured with the ALL_SSD 
policy.
Step-2) Now, create a file "/myparent/myfile" under the "/myparent" dir. It 
inherits the ALL_SSD policy from its parent.
Step-3) Change the storage policy of "/myparent/myfile" to the "COLD" storage 
policy, which uses the ARCHIVE storage type.
Step-4) Write data to the file. Here, the data blocks will be written to 
ARCHIVE storage types.
{code}
{code:java}
Sample Code:-

String fileName = "/myparent/myfile";
final FSDataOutputStream out = dfs.create(new Path(fileName),
    replicationFactor);
dfs.setStoragePolicy(new Path(fileName), "COLD");
for (int i = 0; i < 1024; i++) {
  out.write(i);
}
out.close();
{code}
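If it helps, the effective policy can be verified right after setting it. A 
hedged sketch ({{FileSystem#getStoragePolicy}} is available on recent releases):

{code:java}
// Hypothetical check: confirm the file now carries the COLD policy.
BlockStoragePolicySpi policy = dfs.getStoragePolicy(new Path(fileName));
System.out.println("Effective policy: " + policy.getName()); // expect COLD
{code}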

> DistributedFileSystem.create should allow an option to provide StoragePolicy
> 
>
> Key: HDFS-13209
> URL: https://issues.apache.org/jira/browse/HDFS-13209
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Jean-Marc Spaggiari
>Priority: Major
>
> DistributedFileSystem.create allows to get a FSDataOutputStream. The stored 
> file and related blocks will used the directory based StoragePolicy.
>  
> However, sometime, we might need to keep all files in the same directory 
> (consistency constraint) but might want some of them on SSD (small, in my 
> case) until they are processed and merger/removed. Then they will go on the 
> default policy.
>  
> When creating a file, it will be useful to have an option to specify a 
> different StoragePolicy...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread Weiwei Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Wu updated HDFS-13212:
-
Attachment: HDFS-13212-004.patch

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache. The old location cache 
> will never be invalidated until this mount point changes again.
> We need to invalidate the location cache when adding the mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13212) RBF: Fix router location cache issue

2018-03-06 Thread Weiwei Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387468#comment-16387468
 ] 

Weiwei Wu commented on HDFS-13212:
--

MountTableResolver will add a location cache entry with a null source path, 
which causes a null pointer exception when invalidateLocationCache tries to 
remove it.

Added some code to fix this bug (a sketch of the guard follows).
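A minimal sketch of the guard, hedged: it assumes {{locationCache}} is a 
{{Map<String, PathLocation>}} and that {{PathLocation#getSourcePath()}} may 
legitimately return null, which matches the FindBugs report above:

{code:java}
private void invalidateLocationCache(final String path) {
  if (locationCache == null || locationCache.isEmpty()) {
    return;
  }
  Iterator<Entry<String, PathLocation>> it =
      locationCache.entrySet().iterator();
  while (it.hasNext()) {
    Entry<String, PathLocation> entry = it.next();
    String src = entry.getValue().getSourcePath();
    // Skip entries with a null source path to avoid the NPE,
    // and drop the ones under the refreshed mount point.
    if (src != null && src.startsWith(path)) {
      it.remove();
    }
  }
}
{code}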

> RBF: Fix router location cache issue
> 
>
> Key: HDFS-13212
> URL: https://issues.apache.org/jira/browse/HDFS-13212
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Reporter: Weiwei Wu
>Priority: Major
> Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a location cache. The old location cache 
> will never be invalidated until this mount point changes again.
> We need to invalidate the location cache when adding the mount table entries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12975) Changes to the NameNode to support reads from standby

2018-03-06 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387467#comment-16387467
 ] 

genericqa commented on HDFS-12975:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}148m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.federation.router.TestRouterSafemode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f |
| JIRA Issue | HDFS-12975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12913153/HDFS-12975.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 877bb6a3d317 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 745190e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| unit |