[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-11-08 Thread Zoran Dimitrijevic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245286#comment-16245286
 ] 

Zoran Dimitrijevic commented on HDFS-12052:
---

Thank you!




> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch, HDFS-12052.07.patch
>
>
> When HttpFS runs with httpfs.ssl.enabled, it should return SWEBHDFS delegation 
> tokens. 
> Currently, HttpFS returns the WEBHDFS delegation token "kind" regardless of 
> whether SSL is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works because HttpFS doesn't check whether the 
> token kind is for SWEBHDFS or WEBHDFS. However, this breaks when the YARN RM 
> needs to renew the token for a job (for example, when running hadoop distcp): 
> since the DT kind is WEBHDFS, the RM tries to establish a non-SSL connection 
> to HttpFS and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes the 
> issue (hadoop distcp works).
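
A minimal sketch of the fix direction, assuming the token-kind constants in 
{{org.apache.hadoop.hdfs.web.WebHdfsConstants}}; the surrounding variable names 
are illustrative, not the committed patch:
{code:java}
// Sketch: choose the delegation token kind from the SSL setting so renewers
// (e.g. the YARN RM) contact HttpFS over the right scheme.
// 'sslEnabled' and 'token' are illustrative names.
final Text kind = sslEnabled                    // e.g. httpfs.ssl.enabled
    ? WebHdfsConstants.SWEBHDFS_TOKEN_KIND      // "SWEBHDFS delegation"
    : WebHdfsConstants.WEBHDFS_TOKEN_KIND;      // "WEBHDFS delegation"
token.setKind(kind);
{code}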






[jira] [Updated] (HDFS-10323) transient deleteOnExit failure in ViewFileSystem due to close() ordering

2017-11-08 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HDFS-10323:
-
Attachment: HDFS-10323.003.patch

> transient deleteOnExit failure in ViewFileSystem due to close() ordering
> 
>
> Key: HDFS-10323
> URL: https://issues.apache.org/jira/browse/HDFS-10323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.6.0, 2.7.4, 3.0.0-beta1
>Reporter: Ben Podgursky
>Assignee: Wenxin He
> Attachments: HDFS-10323.001.patch, HDFS-10323.002.patch, 
> HDFS-10323.003.patch
>
>
> After switching to using a ViewFileSystem, fs.deleteOnExit calls began 
> failing frequently, displaying this error on failure:
> 16/04/21 13:56:24 INFO fs.FileSystem: Ignoring failure to deleteOnExit for 
> path /tmp/delete_on_exit_test_123/a438afc0-a3ca-44f1-9eb5-010ca4a62d84
> Since FileSystem eats the error involved, it is difficult to be sure what the 
> error is, but I believe what is happening is that the ViewFileSystem’s child 
> FileSystems are being close()’d before the ViewFileSystem, due to the random 
> order in which ClientFinalizer closes FileSystems; so when the ViewFileSystem 
> then tries to close(), it forwards the delete() calls to the appropriate 
> child and fails because the child is already closed.
> I’m unsure how to write an actual Hadoop test to reproduce this, since it 
> involves testing behavior on actual JVM shutdown. However, I can verify that 
> while
> {code:java}
> fs.deleteOnExit(randomTemporaryDir);
> {code}
> regularly (~50% of the time) fails to delete the temporary directory, this 
> code:
> {code:java}
> ViewFileSystem viewfs = (ViewFileSystem) fs1;
> for (FileSystem fileSystem : viewfs.getChildFileSystems()) {
>   if (fileSystem.exists(randomTemporaryDir)) {
>     fileSystem.deleteOnExit(randomTemporaryDir);
>   }
> }
> {code}
> always successfully deletes the temporary directory on JVM shutdown.
> I am not very familiar with FileSystem inheritance hierarchies, but at first 
> glance I see two ways to fix this behavior:
> 1) ViewFileSystem could forward deleteOnExit calls to the appropriate child 
> FileSystem, and not hold onto that path itself.
> 2) FileSystem.Cache.closeAll could first close all ViewFileSystems, then all 
> other FileSystems.
> Would appreciate any thoughts on whether this seems accurate, and thoughts 
> (or help) on the fix.
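
A hedged sketch of fix option 1 above, assuming ViewFileSystem's internal 
mount-table resolution API ({{InodeTree}}); this illustrates the idea, not the 
attached patches:
{code:java}
// Sketch: ViewFileSystem forwards deleteOnExit to the child FileSystem that
// owns the path, so the child deletes the path before it is closed, instead
// of ViewFileSystem holding the path and racing ClientFinalizer's ordering.
@Override
public boolean deleteOnExit(Path f) throws IOException {
  InodeTree.ResolveResult<FileSystem> res =
      fsState.resolve(getUriPath(f), true);   // locate the mount target
  return res.targetFileSystem.deleteOnExit(res.remainingPath);
}
{code}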






[jira] [Updated] (HDFS-10323) transient deleteOnExit failure in ViewFileSystem due to close() ordering

2017-11-08 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HDFS-10323:
-
Status: Open  (was: Patch Available)

> transient deleteOnExit failure in ViewFileSystem due to close() ordering
> 
>
> Key: HDFS-10323
> URL: https://issues.apache.org/jira/browse/HDFS-10323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0-beta1, 2.7.4, 2.6.0
>Reporter: Ben Podgursky
>Assignee: Wenxin He
> Attachments: HDFS-10323.001.patch, HDFS-10323.002.patch, 
> HDFS-10323.003.patch
>
>
> After switching to using a ViewFileSystem, fs.deleteOnExit calls began 
> failing frequently, displaying this error on failure:
> 16/04/21 13:56:24 INFO fs.FileSystem: Ignoring failure to deleteOnExit for 
> path /tmp/delete_on_exit_test_123/a438afc0-a3ca-44f1-9eb5-010ca4a62d84
> Since FileSystem eats the error involved, it is difficult to be sure what the 
> error is, but I believe what is happening is that the ViewFileSystem’s child 
> FileSystems are being close()’d before the ViewFileSystem, due to the random 
> order in which ClientFinalizer closes FileSystems; so when the ViewFileSystem 
> then tries to close(), it forwards the delete() calls to the appropriate 
> child and fails because the child is already closed.
> I’m unsure how to write an actual Hadoop test to reproduce this, since it 
> involves testing behavior on actual JVM shutdown. However, I can verify that 
> while
> {code:java}
> fs.deleteOnExit(randomTemporaryDir);
> {code}
> regularly (~50% of the time) fails to delete the temporary directory, this 
> code:
> {code:java}
> ViewFileSystem viewfs = (ViewFileSystem) fs1;
> for (FileSystem fileSystem : viewfs.getChildFileSystems()) {
>   if (fileSystem.exists(randomTemporaryDir)) {
>     fileSystem.deleteOnExit(randomTemporaryDir);
>   }
> }
> {code}
> always successfully deletes the temporary directory on JVM shutdown.
> I am not very familiar with FileSystem inheritance hierarchies, but at first 
> glance I see two ways to fix this behavior:
> 1) ViewFileSystem could forward deleteOnExit calls to the appropriate child 
> FileSystem, and not hold onto that path itself.
> 2) FileSystem.Cache.closeAll could first close all ViewFileSystems, then all 
> other FileSystems.
> Would appreciate any thoughts on whether this seems accurate, and thoughts 
> (or help) on the fix.






[jira] [Updated] (HDFS-10323) transient deleteOnExit failure in ViewFileSystem due to close() ordering

2017-11-08 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HDFS-10323:
-
Status: Patch Available  (was: Open)

> transient deleteOnExit failure in ViewFileSystem due to close() ordering
> 
>
> Key: HDFS-10323
> URL: https://issues.apache.org/jira/browse/HDFS-10323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 3.0.0-beta1, 2.7.4, 2.6.0
>Reporter: Ben Podgursky
>Assignee: Wenxin He
> Attachments: HDFS-10323.001.patch, HDFS-10323.002.patch, 
> HDFS-10323.003.patch
>
>
> After switching to using a ViewFileSystem, fs.deleteOnExit calls began 
> failing frequently, displaying this error on failure:
> 16/04/21 13:56:24 INFO fs.FileSystem: Ignoring failure to deleteOnExit for 
> path /tmp/delete_on_exit_test_123/a438afc0-a3ca-44f1-9eb5-010ca4a62d84
> Since FileSystem eats the error involved, it is difficult to be sure what the 
> error is, but I believe what is happening is that the ViewFileSystem’s child 
> FileSystems are being close()’d before the ViewFileSystem, due to the random 
> order in which ClientFinalizer closes FileSystems; so when the ViewFileSystem 
> then tries to close(), it forwards the delete() calls to the appropriate 
> child and fails because the child is already closed.
> I’m unsure how to write an actual Hadoop test to reproduce this, since it 
> involves testing behavior on actual JVM shutdown. However, I can verify that 
> while
> {code:java}
> fs.deleteOnExit(randomTemporaryDir);
> {code}
> regularly (~50% of the time) fails to delete the temporary directory, this 
> code:
> {code:java}
> ViewFileSystem viewfs = (ViewFileSystem) fs1;
> for (FileSystem fileSystem : viewfs.getChildFileSystems()) {
>   if (fileSystem.exists(randomTemporaryDir)) {
>     fileSystem.deleteOnExit(randomTemporaryDir);
>   }
> }
> {code}
> always successfully deletes the temporary directory on JVM shutdown.
> I am not very familiar with FileSystem inheritance hierarchies, but at first 
> glance I see two ways to fix this behavior:
> 1) ViewFileSystem could forward deleteOnExit calls to the appropriate child 
> FileSystem, and not hold onto that path itself.
> 2) FileSystem.Cache.closeAll could first close all ViewFileSystems, then all 
> other FileSystems.
> Would appreciate any thoughts on whether this seems accurate, and thoughts 
> (or help) on the fix.






[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245284#comment-16245284
 ] 

Xiao Chen commented on HDFS-12052:
--

Aha, thank you for the response. I skimmed through the comments and assumed 
this wasn't committed to branch-2 because of conflicts. Sorry for the wrong 
assumption.

Just tried backporting, and the conflict is pretty minor. I'll go ahead and 
cherry-pick this to branch-2 if there are no objections. Thanks!

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch, HDFS-12052.07.patch
>
>
> When HttpFS runs with httpfs.ssl.enabled, it should return SWEBHDFS delegation 
> tokens. 
> Currently, HttpFS returns the WEBHDFS delegation token "kind" regardless of 
> whether SSL is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works because HttpFS doesn't check whether the 
> token kind is for SWEBHDFS or WEBHDFS. However, this breaks when the YARN RM 
> needs to renew the token for a job (for example, when running hadoop distcp): 
> since the DT kind is WEBHDFS, the RM tries to establish a non-SSL connection 
> to HttpFS and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes the 
> issue (hadoop distcp works).






[jira] [Commented] (HDFS-12732) Correct spellings of ramdomly to randomly in log.

2017-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245277#comment-16245277
 ] 

Hudson commented on HDFS-12732:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13208 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13208/])
HDFS-12732. Correct spellings of ramdomly to randomly in log. (aajisaka: rev 
3a3566e1d1ab5f78cfb734796b41802fe039196d)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java


> Correct spellings of ramdomly to randomly in log.
> -
>
> Key: HDFS-12732
> URL: https://issues.apache.org/jira/browse/HDFS-12732
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-12732.001.patch
>
>
> Correct spellings of ramdomly to randomly in log.






[jira] [Commented] (HDFS-12790) [SPS]: Rebasing HDFS-10285 branch after HDFS-10467, HDFS-12599 and HDFS-11968 commits

2017-11-08 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245275#comment-16245275
 ] 

Rakesh R commented on HDFS-12790:
-

It seems QA runs the {{HDFS-10285 Compile Tests}}, i.e. the {{HDFS-10285 
compilation: pre-patch}} step, without applying the patch, so the compilation 
error is expected here. As I mentioned in the jira description, after rebasing, 
the branch is broken due to the trunk code changes. 
{code}
-1  mvninstall  8m 14s  root in HDFS-10285 failed.
-1  compile 0m 30s  hadoop-hdfs in HDFS-10285 failed.
{code}
{code}
[ERROR] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:[139,7]
 error: RouterRpcServer is not abstract and does not override abstract method 
checkStoragePolicySatisfyPathStatus(String) in ClientProtocol
{code}

After applying the patch, it compiles successfully:
{code}
+1  mvninstall  0m 59s  the patch passed
+1  compile 0m 51s  the patch passed
{code}

I will attach another patch fixing the test case failures in 
TestViewFSStoragePolicyCommands and TestWebHDFSStoragePolicyCommands.
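
For context, a hedged sketch of the kind of change the rebase needs in 
{{RouterRpcServer}} (only the missing override comes from the compile error 
above; the stub body and return type handling are assumptions):
{code:java}
// Sketch: implement the ClientProtocol method added on trunk so the
// federation router compiles against the rebased branch. The minimal stub
// shown here simply declares the operation unsupported by the Router.
@Override // ClientProtocol
public StoragePolicySatisfyPathStatus checkStoragePolicySatisfyPathStatus(
    String path) throws IOException {
  checkOperation(OperationCategory.READ, false); // assumed stub pattern
  return null;
}
{code}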

> [SPS]: Rebasing HDFS-10285 branch after HDFS-10467, HDFS-12599 and HDFS-11968 
> commits
> -
>
> Key: HDFS-12790
> URL: https://issues.apache.org/jira/browse/HDFS-12790
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Rakesh R
>Assignee: Rakesh R
> Attachments: HDFS-12790-HDFS-10285-00.patch, 
> HDFS-12790-HDFS-10285-01.patch
>
>
> This task is a continuation with the periodic HDFS-10285 branch code rebasing 
> with the trunk code. To make branch code compile with the trunk code, it 
> needs to be refactored with the latest trunk code changes - HDFS-10467, 
> HDFS-12599 and HDFS-11968.






[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-11-08 Thread Zoran Dimitrijevic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245268#comment-16245268
 ] 

Zoran Dimitrijevic commented on HDFS-12052:
---

Sure. But I think it was exactly the same fix as for 3+; I did it for the 
Altiscale 2.7+ branch. What do you want me to do?




> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch, HDFS-12052.07.patch
>
>
> When HttpFS runs with httpfs.ssl.enabled, it should return SWEBHDFS delegation 
> tokens. 
> Currently, HttpFS returns the WEBHDFS delegation token "kind" regardless of 
> whether SSL is enabled. If clients connect directly to renew tokens (for 
> example, hdfs dfs), everything works because HttpFS doesn't check whether the 
> token kind is for SWEBHDFS or WEBHDFS. However, this breaks when the YARN RM 
> needs to renew the token for a job (for example, when running hadoop distcp): 
> since the DT kind is WEBHDFS, the RM tries to establish a non-SSL connection 
> to HttpFS and fails.
> I've tested a simple patch, which I'll upload to this jira; it fixes the 
> issue (hadoop distcp works).






[jira] [Comment Edited] (HDFS-12732) Correct spellings of ramdomly to randomly in log.

2017-11-08 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245260#comment-16245260
 ] 

Akira Ajisaka edited comment on HDFS-12732 at 11/9/17 6:19 AM:
---

Committed this to trunk and branch-3.0. Thanks [~xiaodong.hu] for the 
contribution and thanks [~msingh] for the review.


was (Author: ajisakaa):
Committed this to trunk. Thanks [~xiaodong.hu] for the contribution and thanks 
[~msingh] for the review.

> Correct spellings of ramdomly to randomly in log.
> -
>
> Key: HDFS-12732
> URL: https://issues.apache.org/jira/browse/HDFS-12732
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-12732.001.patch
>
>
> Correct spellings of ramdomly to randomly in log.






[jira] [Updated] (HDFS-12732) Correct spellings of ramdomly to randomly in log.

2017-11-08 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-12732:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~xiaodong.hu] for the contribution and thanks 
[~msingh] for the review.

> Correct spellings of ramdomly to randomly in log.
> -
>
> Key: HDFS-12732
> URL: https://issues.apache.org/jira/browse/HDFS-12732
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-12732.001.patch
>
>
> Correct spellings of ramdomly to randomly in log.






[jira] [Comment Edited] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-08 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245247#comment-16245247
 ] 

Yiqun Lin edited comment on HDFS-12740 at 11/9/17 6:05 AM:
---

Thanks for working on this, [~shashikant]. Just some minor comments:

* Could you make the naming consistent? Sometimes we use {{scmId}}, but other 
places use {{scmUuid}}.
{code}
   */
-  public String getscmUuid() {
+  public String getScmId() {
     return getStorageInfo().getProperty(SCM_ID);
   }
 
   @Override
   protected Properties getNodeProperties() {
-    String scmUuid = getscmUuid();
+    String scmUuid = getScmId();
     if (scmUuid == null) {
       scmUuid = UUID.randomUUID().toString();
     }
{code}

* We need a unit test for the RPC call defined in 
{{ScmBlockLocationProtocolClientSideTranslatorPB}}. The current test only 
tests the method {{StorageContainerManager#getScmInfo}}; you should use a 
client to send an RPC call and verify the returned info (see the sketch below).
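
A rough sketch of such a test; the client handle and expected values are 
hypothetical setup, not existing test code, and the accessor names follow the 
patch's intent:
{code:java}
// Sketch: exercise the RPC path end-to-end through the client-side
// translator instead of invoking StorageContainerManager#getScmInfo directly.
// 'scmBlockClient' is a hypothetical
// ScmBlockLocationProtocolClientSideTranslatorPB built against the test SCM.
ScmInfo scmInfo = scmBlockClient.getScmInfo();
Assert.assertEquals(expectedClusterId, scmInfo.getClusterId());
Assert.assertEquals(expectedScmId, scmInfo.getScmId());
{code}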


was (Author: linyiqun):
Thanks for working on this, [~shashikant]. Just some minor comments:

* Could you make the naming consistent? Sometimes we use {{scmId}}, but other 
places use {{scmUuid}}.
{code}
   */
-  public String getscmUuid() {
+  public String getScmId() {
     return getStorageInfo().getProperty(SCM_ID);
   }
 
   @Override
   protected Properties getNodeProperties() {
-    String scmUuid = getscmUuid();
+    String scmUuid = getScmId();
     if (scmUuid == null) {
       scmUuid = UUID.randomUUID().toString();
     }
{code}

* We need a unit test for the RPC call defined in 
{{ScmBlockLocationProtocolClientSideTranslatorPB}}. The current test only 
tests the method {{StorageContainerManager#getScmInfo}}.

> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12740-HDFS-7240.001.patch
>
>
> When the Ozone cluster is first created, the SCM --init command will generate 
> a cluster Id as well as an SCM Id and persist them locally. The same cluster 
> Id and SCM Id will be shared with the KSM during KSM initialization and with 
> DataNodes during datanode registration. 






[jira] [Commented] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-08 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245247#comment-16245247
 ] 

Yiqun Lin commented on HDFS-12740:
--

Thanks for working on this, [~shashikant]. Just some minor comments:

* Could you make the naming consistent? Sometimes we use {{scmId}}, but other 
places use {{scmUuid}}.
{code}
   */
-  public String getscmUuid() {
+  public String getScmId() {
     return getStorageInfo().getProperty(SCM_ID);
   }
 
   @Override
   protected Properties getNodeProperties() {
-    String scmUuid = getscmUuid();
+    String scmUuid = getScmId();
     if (scmUuid == null) {
       scmUuid = UUID.randomUUID().toString();
     }
{code}

* We need a unit test for the RPC call defined in 
{{ScmBlockLocationProtocolClientSideTranslatorPB}}. The current test only 
tests the method {{StorageContainerManager#getScmInfo}}.

> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12740-HDFS-7240.001.patch
>
>
> When the Ozone cluster is first created, the SCM --init command will generate 
> a cluster Id as well as an SCM Id and persist them locally. The same cluster 
> Id and SCM Id will be shared with the KSM during KSM initialization and with 
> DataNodes during datanode registration. 






[jira] [Updated] (HDFS-12793) Ozone : TestSCMCli is failing consistently

2017-11-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12793:
--
Status: Patch Available  (was: Open)

> Ozone : TestSCMCli is failing consistently
> --
>
> Key: HDFS-12793
> URL: https://issues.apache.org/jira/browse/HDFS-12793
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12793-HDFS-7240.001.patch
>
>
> In the Jenkins builds of HDFS-12787 and HDFS-12758, the same three tests in 
> {{TestSCMCli}} failed: {{testCloseContainer}}, {{testDeleteContainer}} and 
> {{testInfoContainer}}. I tested locally; these three tests have been failing 
> consistently.






[jira] [Commented] (HDFS-12793) Ozone : TestSCMCli is failing consistently

2017-11-08 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245245#comment-16245245
 ] 

Chen Liang commented on HDFS-12793:
---

The failures of the three tests were all caused by closing a container, then 
checking the container's status and finding it still in the open state (in 
{{ContainerMapping#closeContainer}}). The reason seems to be that 
{{updateContainerState}}, after updating the container status, should return 
{{updatedContainer.getState();}} instead of the original state.
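
A minimal sketch of the suspected one-line fix, paraphrasing the analysis 
above; the surrounding variable names are assumptions:
{code:java}
// In updateContainerState (sketch): return the state of the container record
// produced by the update, not the state captured before the update.
ContainerInfo updatedContainer =
    containerStateManager.updateContainerState(containerInfo, event);
return updatedContainer.getState();   // instead of the pre-update state
{code}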

> Ozone : TestSCMCli is failing consistently
> --
>
> Key: HDFS-12793
> URL: https://issues.apache.org/jira/browse/HDFS-12793
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12793-HDFS-7240.001.patch
>
>
> In the Jenkins builds of HDFS-12787 and HDFS-12758, the same three tests in 
> {{TestSCMCli}} failed: {{testCloseContainer}}, {{testDeleteContainer}} and 
> {{testInfoContainer}}. I tested locally; these three tests have been failing 
> consistently.






[jira] [Updated] (HDFS-12793) Ozone : TestSCMCli is failing consistently

2017-11-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12793:
--
Attachment: HDFS-12793-HDFS-7240.001.patch

> Ozone : TestSCMCli is failing consistently
> --
>
> Key: HDFS-12793
> URL: https://issues.apache.org/jira/browse/HDFS-12793
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ozone
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-12793-HDFS-7240.001.patch
>
>
> In the Jenkins builds of HDFS-12787 and HDFS-12758, the same three tests in 
> {{TestSCMCli}} failed: {{testCloseContainer}}, {{testDeleteContainer}} and 
> {{testInfoContainer}}. I tested locally; these three tests have been failing 
> consistently.






[jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks

2017-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245239#comment-16245239
 ] 

Xiao Chen commented on HDFS-12618:
--

I finally managed to review this in more detail, together with Daryn's (great) 
comments. Also helpful to know Yahoo is able to run fsck on / frequently...

Looking again at the patch and comments, I think Daryn's comment regarding 
{{INodeReference.With\[Name|Count\]}} is not yet understood and reflected in 
this patch. It's not a safe assumption that {{iip.getLastINode()}} will be an 
{{INodeFile}}; for snapshots, it is usually an {{INodeReference.WithName}}.
Admittedly, snapshots are very complicated and require a lot of effort (at 
least for me) to get right. One way to look into it is probably with some 
examples, debugging from {{FSDirRenameOp$RenameOperation}} when a rename 
happens.

Related to the above, I suggest we add unit tests to cover renames (since 
delete doesn't create the same INode reference links as rename does) and 
multiple snapshots (to cover the case where multiple snapshots have different 
but overlapping blocks, so we test the not-the-last-WithName path).
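
For illustration, a hedged sketch of the unwrapping this concern points at; 
the INode accessors are the standard ones, while the counting logic around 
them is an assumption:
{code:java}
// Sketch: a snapshot path's last INode is often an INodeReference.WithName,
// so resolve the reference chain before treating it as an INodeFile.
INode last = iip.getLastINode();
while (last != null && last.isReference()) {
  last = last.asReference().getReferredINode(); // WithName -> WithCount -> inode
}
if (last != null && last.isFile()) {
  INodeFile file = last.asFile();
  // ... count this file's blocks exactly once, however many snapshot
  // paths reference it
}
{code}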

Thanks.

> fsck -includeSnapshots reports wrong amount of total blocks
> ---
>
> Key: HDFS-12618
> URL: https://issues.apache.org/jira/browse/HDFS-12618
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HDFS-121618.initial, HDFS-12618.001.patch, 
> HDFS-12618.002.patch, HDFS-12618.003.patch
>
>
> When snapshots are enabled, if a file is deleted but is still contained in a 
> snapshot, *fsck* will not report blocks for that file, showing a different 
> number of *total blocks* than what is exposed in the Web UI. 
> This should be fine, as *fsck* provides the *-includeSnapshots* option. The 
> problem is that *-includeSnapshots* causes *fsck* to count blocks for every 
> occurrence of a file in snapshots, which is wrong because these blocks should 
> be counted only once (for instance, if a 100MB file is present in 3 snapshots, 
> it still maps to only one block in HDFS). This causes fsck to report many 
> more blocks than actually exist in HDFS and are reported in 
> the Web UI.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noformat}
> $ hdfs dfs -ls -R /
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 /snap-test
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/file2
> drwxr-xr-x   - root supergroup  0 2017-05-13 13:03 /test
> {noformat} 
> 2) There are two snapshots, with the two files present on each of the 
> snapshots:
> {noformat}
> $ hdfs dfs -ls -R /snap-test/.snapshot
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap1/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap1/file2
> drwxr-xr-x   - root supergroup  0 2017-10-07 21:21 
> /snap-test/.snapshot/snap2
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 
> /snap-test/.snapshot/snap2/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 
> /snap-test/.snapshot/snap2/file2
> {noformat}
> 3) *fsck -includeSnapshots* reports 12 blocks in total (4 blocks for the 
> normal file path, plus 4 blocks for each snapshot path):
> {noformat}
> $ hdfs fsck / -includeSnapshots
> FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path / at Mon Oct 09 
> 15:15:36 BST 2017
> Status: HEALTHY
>  Number of data-nodes:1
>  Number of racks: 1
>  Total dirs:  6
>  Total symlinks:  0
> Replicated Blocks:
>  Total size:  1258291200 B
>  Total files: 6
>  Total blocks (validated):12 (avg. block size 104857600 B)
>  Minimally replicated blocks: 12 (100.0 %)
>  Over-replicated blocks:  0 (0.0 %)
>  Under-replicated blocks: 0 (0.0 %)
>  Mis-replicated blocks:   0 (0.0 %)
>  Default replication factor:  1
>  Average block replication:   1.0
>  Missing blocks:  0
>  Corrupt blocks:  0
>  Missing replicas:0 (0.0 %)
> {noformat}
> 4) Web UI shows the correct number (4 blocks only):
> {noformat}
> Security is off.
> Safemode is off.
> 5 files and directories, 4 blocks = 9 total filesystem object(s).
> {noformat}
> I would like to work on this; I will propose an initial solution 
> shortly.
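
A minimal sketch of the counting idea under discussion; the data structures 
here are assumptions, not the attached patch:
{code:java}
// Sketch: remember block IDs already counted so a block reachable through
// several snapshot paths contributes to the total only once.
Set<Long> countedBlocks = new HashSet<>();
for (BlockInfo block : file.getBlocks()) {
  if (countedBlocks.add(block.getBlockId())) {
    totalBlocks++;   // first sighting of this block
  }
}
{code}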





[jira] [Updated] (HDFS-12719) Ozone: Fix checkstyle, javac, whitespace issues in HDFS-7240 branch

2017-11-08 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12719:
-
Status: Patch Available  (was: Open)

> Ozone: Fix checkstyle, javac, whitespace issues in HDFS-7240 branch
> ---
>
> Key: HDFS-12719
> URL: https://issues.apache.org/jira/browse/HDFS-12719
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12719-HDFS-7240.001.patch, 
> HDFS-12719-HDFS-7240.002.patch, HDFS-12719-HDFS-7240.002.patch, 
> HDFS-12719-HDFS-7240.003.patch
>
>
> There are outstanding whitespace/javac/checkstyle issues on the HDFS-7240 
> branch. These were observed by uploading the branch diff to the trunk via 
> parent jira HDFS-7240. This jira will fix all the valid outstanding issues.






[jira] [Commented] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245206#comment-16245206
 ] 

Chen Liang commented on HDFS-12758:
---

+1 on the v00 patch. The failed tests are unrelated. I've committed this to 
the feature branch. Thanks [~bharatviswa] for the contribution! 

> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12758-HDFS-7240.00.patch
>
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}
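
For reference, the convention being enforced: JUnit's {{assertEquals}} takes 
the expected value first, then the actual value. The values below are made up 
for illustration:
{code:java}
// Swapped arguments produce misleading failure messages such as
// "expected:<actual> but was:<10>".
Assert.assertEquals(volumeCount, 10);   // before: expected/actual swapped
Assert.assertEquals(10, volumeCount);   // after: expected first, then actual
{code}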






[jira] [Updated] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12758:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12758-HDFS-7240.00.patch
>
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}






[jira] [Created] (HDFS-12793) Ozone : TestSCMCli is failing consistently

2017-11-08 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12793:
-

 Summary: Ozone : TestSCMCli is failing consistently
 Key: HDFS-12793
 URL: https://issues.apache.org/jira/browse/HDFS-12793
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ozone
Reporter: Chen Liang
Assignee: Chen Liang


In the Jenkins builds of HDFS-12787 and HDFS-12758, the same three tests in 
{{TestSCMCli}} failed: {{testCloseContainer}}, {{testDeleteContainer}} and 
{{testInfoContainer}}. I tested locally; these three tests have been failing 
consistently.






[jira] [Commented] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245187#comment-16245187
 ] 

Hadoop QA commented on HDFS-12776:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
37s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
41s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-tools/hadoop-fs2img in HDFS-9806 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} HDFS-9806 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} root: The patch generated 0 new + 102 unchanged - 1 
fixed = 102 total (was 103) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-fs2img in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}192m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12776 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896778/HDFS-12776-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 80b5b428741a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-9806 / 757ff83 |
| maven | version: Apache 

[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-08 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245161#comment-16245161
 ] 

Weiwei Yang commented on HDFS-12638:


Hi [~shv]

Thanks, I see your point. I have increased the priority of this bug to 
critical, as it carries a big risk of crashing the NN in a production cluster: 
with a DN rolling update or snapshot creation, this issue can be easily 
triggered. [~jingzhao], please share your thoughts on this. Thanks.

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Critical
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why. Looking through the history, I found 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}
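
A hedged sketch of the kind of guard the report says HDFS-9754 removed; its 
placement and the surrounding names are assumptions:
{code:java}
// Sketch: skip scheduling replication work for a block whose BlockCollection
// is gone (e.g. the file was deleted or truncated away), instead of passing
// a null collection into ReplicationWork.
BlockCollection bc = blockManager.getBlockCollection(block);
if (bc == null) {
  continue;   // orphaned block: nothing to replicate
}
{code}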






[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-08 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12638:
---
Target Version/s: 3.0.0

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Critical
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why. Looking through the history, I found 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}






[jira] [Updated] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-08 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12638:
---
Priority: Critical  (was: Major)

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
>Priority: Critical
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why. Looking through the history, I found 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}






[jira] [Updated] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-11-08 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12459:
---
Attachment: HDFS-12459.006.patch

> Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-12459
> URL: https://issues.apache.org/jira/browse/HDFS-12459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12459.001.patch, HDFS-12459.002.patch, 
> HDFS-12459.003.patch, HDFS-12459.004.patch, HDFS-12459.005.patch, 
> HDFS-12459.006.patch
>
>
> HDFS-11156 was reverted because the implementation was non-optimal. Based on 
> the suggestion from [~shahrs87], we should avoid creating a DFS client to get 
> block locations because that creates an extra RPC call. Instead we should use 
> {{NamenodeProtocols#getBlockLocations}} and then convert {{LocatedBlocks}} to 
> {{BlockLocation[]}}.
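
A minimal sketch of the suggested approach, assuming the existing 
{{DFSUtilClient}} conversion helper; how the NameNode handle is obtained is 
illustrative:
{code:java}
// Sketch: serve GETFILEBLOCKLOCATIONS inside the NameNode web handler
// without constructing a DFSClient, then convert the RPC result.
LocatedBlocks locatedBlocks =
    namenode.getRpcServer().getBlockLocations(fullpath, offset, length);
BlockLocation[] locations =
    DFSUtilClient.locatedBlocks2Locations(locatedBlocks);
{code}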






[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-11-08 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245145#comment-16245145
 ] 

Weiwei Yang commented on HDFS-12459:


Hi [~shahrs87]

bq. JsonUtil should have method toJsonString#toJsonString(BlockLocations[]) 
just to be consistent with other methods

Done

bq. From the diff, it looks like you changed 
testWebHdfsGetBlockLocationsWithStorageType method which is not correct.

Well, that was to fix the checkstyle warning on the final local variable 
names. The diff seems a bit confusing, I agree; I did not include that change 
in the v6 patch, let's see how the Jenkins report looks.

Thanks

> Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-12459
> URL: https://issues.apache.org/jira/browse/HDFS-12459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12459.001.patch, HDFS-12459.002.patch, 
> HDFS-12459.003.patch, HDFS-12459.004.patch, HDFS-12459.005.patch
>
>
> HDFS-11156 was reverted because the implementation was non-optimal. Based on 
> the suggestion from [~shahrs87], we should avoid creating a DFS client to get 
> block locations because that creates an extra RPC call. Instead we should use 
> {{NamenodeProtocols#getBlockLocations}} and then convert {{LocatedBlocks}} to 
> {{BlockLocation[]}}.






[jira] [Commented] (HDFS-12638) NameNode exits due to ReplicationMonitor thread received Runtime exception in ReplicationWork#chooseTargets

2017-11-08 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245134#comment-16245134
 ] 

Konstantin Shvachko commented on HDFS-12638:


[~wweic] {{addDeleteBlock()}} is supposed to be called when the block is 
really intended to be deleted, that is, when it is not contained in any 
snapshots. I checked the code; there is a lot of logic around detecting which 
blocks do or do not belong to a snapshot, see e.g. 
{{INodeFile.collectBlocksBeyondSnapshot()}}. This makes it safe to delete the 
{{truncateBlock}}, unless you have a test case as a counterexample. 

Did some digging and now understand why we don't see this in 2.7.4. The 
following line was introduced into {{addDeleteBlock()}} by HDFS-9754:
{code}
   assert toDelete != null : "toDelete is null";
+  toDelete.delete();
   toDeleteList.add(toDelete);
{code}
which sets {{Block.bcId = INVALID_INODE_ID}}. I think this was the wrong place 
to invalidate bcId, as [I mentioned 
earlier|https://issues.apache.org/jira/browse/HDFS-12638?focusedCommentId=16214120=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16214120].
[~jingzhao], could you please take a look?

> NameNode exits due to ReplicationMonitor thread received Runtime exception in 
> ReplicationWork#chooseTargets
> ---
>
> Key: HDFS-12638
> URL: https://issues.apache.org/jira/browse/HDFS-12638
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.2
>Reporter: Jiandan Yang 
> Attachments: HDFS-12638-branch-2.8.2.001.patch, HDFS-12638.002.patch, 
> OphanBlocksAfterTruncateDelete.jpg
>
>
> The active NameNode exits due to an NPE. I can confirm that the 
> BlockCollection passed in when creating ReplicationWork is null, but I do not 
> know why BlockCollection is null. Looking through the history, I found that 
> [HDFS-9754|https://issues.apache.org/jira/browse/HDFS-9754] removed the check 
> for whether BlockCollection is null.
> NN logs are as follows:
> {code:java}
> 2017-10-11 16:29:06,161 ERROR [ReplicationMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: 
> ReplicationMonitor thread received Runtime exception.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.ReplicationWork.chooseTargets(ReplicationWork.java:55)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWorkForBlocks(BlockManager.java:1532)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeReplicationWork(BlockManager.java:1491)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.computeDatanodeWork(BlockManager.java:3792)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:3744)
> at java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-11-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245119#comment-16245119
 ] 

Virajith Jalaparti commented on HDFS-12665:
---

Thanks for the updated patch [~ehiggs]. I took a look at it and made the 
following changes for v5 (attached now).

- Removed unnecessary parameters in hdfs-default.xml 
({{dfs.datanode.block.provider.class}}, {{dfs.provided.blockformat.class}}, 
{{dfs.namenode.block.provider.class}})
- Changed the names of the new parameters (in {{DFSConfigKeys}}) to add more 
context to them.
- Renamed {{ITAliasMap}} to {{ITestInMemoryAliasMap}} and {{TestAliasMap}} to 
{{TestInMemoryAliasMap}}
- Reverted the change from {{TestFileRegionBlockAliasMap}} to 
{{TestFileRegionProvider}}
- Added back the checks for block pool id in 
{{ProvidedVolumeImpl.compileReport}} and {{ProviderBlockIteratorImpl#nextBlock}}
- Modified some of the comments, as they were circular.
- Fixed some checkstyle issues.
- Renamed {{InMemoryAliasMapProtocolTranslatorPB}} to 
{{InMemoryAliasMapProtocolClientSideTranslatorPB}}
- Added {{DFS_PROVIDED_ALIASMAP_INMEMORY_RPC_ADDRESS_DEFAULT}} as the default 
RPC address for the in-memory alias map
- Replaced {{assertThat}} with {{assertTrue}}, {{assertFalse}} and 
{{assertEquals}} as required, in various places.
- Moved the tests from {{TestMultiThreadedAliasMapClient}} to 
{{TestInMemoryLevelDBAliasMapClient}}
- I don't understand what {{TestNameNodeLevelDbProvidedImplementation}} really 
tests. The {{writeRead()}} and {{list()}} methods seem to be creating files 
even though we can't really create provided files. Also, the {{createImage()}} 
function doesn't traverse the provided directory to create an image. So it is 
not really testing the use of the InMemoryAliasMap by the NN and DN. I 
replaced these tests with 
{{TestNameNodeProvidedImplementation#testInMemoryAliasMap}}.
- Reverted the change in {{TreePath#writeBlock}} to retain the block pool id.

A few questions:
- {{InMemoryAliasMap}} uses 
{{LoggerFactory.getLogger(InMemoryLevelDBAliasMapServer.class)}}. Was this 
intentional? 
- Should we rename {{InMemoryAliasMap}} to {{InMemoryLevelDBAliasMap}}, similar 
to the other classes?
- Why add a new {{iterator()}} to {{BlockAliasMap}} and not use the 
{{BlockAliasMap.Reader}}?
 
*Blockers which need to be fixed*:
# {{TestNameNodeProvidedImplementation#testInMemoryAliasMap}} fails. This is 
because the block pool id is set to NULL for all FileRegions written to 
leveldb. As a result, the {{ProvidedVolumeImpl}} doesn't process the blocks 
properly. We need to store the block pool id along with the blocks. My 
proposal for now is to just have {{FileRegion}} as the value in leveldb so 
that the block pool id can be retrieved. Breaking down {{FileRegion}} into a 
(key, value) pair can be done as part of HDFS-12713. What do you think?
# {{InMemoryLevelDBAliasMapClient#LevelDbWriter}} doesn't work without a 
running server. The image generation tool doesn't start a server before using 
{{BlockAliasMap#getWriter()}}. {{LevelDbWriter}} needs to be modified so that 
it alone is sufficient to write block alias information to the leveldb store.

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch, HDFS-12665-HDFS-9806.003.patch, 
> HDFS-12665-HDFS-9806.004.patch, HDFS-12665-HDFS-9806.005.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).
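> 
> A toy, self-contained stand-in for the mapping described above (the real 
> version keys a leveldb on a protobuf of blockpool/blockid/genstamp and stores 
> url/offset/length/nonce; all names here are illustrative, not the patch):
> {code:java}
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
> 
> public class ToyAliasMap {
>   private final Map<String, String> store = new ConcurrentHashMap<>();
> 
>   // key mirrors (blockpool, blockid, genstamp)
>   private static String key(String bpid, long blockId, long genStamp) {
>     return bpid + "/" + blockId + "/" + genStamp;
>   }
> 
>   // value mirrors (url, offset, length, nonce)
>   public void write(String bpid, long blockId, long genStamp,
>                     String url, long offset, long length, String nonce) {
>     store.put(key(bpid, blockId, genStamp),
>         url + "," + offset + "," + length + "," + nonce);
>   }
> 
>   public String read(String bpid, long blockId, long genStamp) {
>     return store.get(key(bpid, blockId, genStamp));
>   }
> }
> {code}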



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12754) Lease renewal can hit a deadlock

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245110#comment-16245110
 ] 

Hadoop QA commented on HDFS-12754:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 41s{color} 
| {color:red} hadoop-hdfs-project generated 1 new + 426 unchanged - 0 fixed = 
427 total (was 426) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
225 unchanged - 0 fixed = 228 total (was 225) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
31s{color} | {color:red} The patch generated 61 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}239m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:5 |
| Failed junit tests | hadoop.hdfs.TestReservedRawPaths |
|   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |

[jira] [Updated] (HDFS-12665) [AliasMap] Create a version of the AliasMap that runs in memory in the Namenode (leveldb)

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12665:
--
Attachment: HDFS-12665-HDFS-9806.005.patch

> [AliasMap] Create a version of the AliasMap that runs in memory in the 
> Namenode (leveldb)
> -
>
> Key: HDFS-12665
> URL: https://issues.apache.org/jira/browse/HDFS-12665
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
> Attachments: HDFS-12665-HDFS-9806.001.patch, 
> HDFS-12665-HDFS-9806.002.patch, HDFS-12665-HDFS-9806.003.patch, 
> HDFS-12665-HDFS-9806.004.patch, HDFS-12665-HDFS-9806.005.patch
>
>
> The design of Provided Storage requires the use of an AliasMap to manage the 
> mapping between blocks of files on the local HDFS and ranges of files on a 
> remote storage system. To reduce load from the Namenode, this can be done 
> using a pluggable external service (e.g. AzureTable, Cassandra, Ratis). 
> However, to aid adoption and ease of deployment, we propose an in-memory 
> version.
> This AliasMap will be a wrapper around LevelDB (already a dependency from the 
> Timeline Service) and use protobuf for the key (blockpool, blockid, and 
> genstamp) and the value (url, offset, length, nonce). The in memory service 
> will also have a configurable port on which it will listen for updates from 
> Storage Policy Satisfier (SPS) Coordinating Datanodes (C-DN).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245086#comment-16245086
 ] 

Hadoop QA commented on HDFS-12758:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
22s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
36s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}214m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | HDFS-12758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896734/HDFS-12758-HDFS-7240.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 348adfdf1d67 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245081#comment-16245081
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12594:


/SnapshotDiffReportListing.j
- In DiffReportListingEntry, getSourcePath(), getTargetPath() and getParent() 
should return byte[][].
It is inefficient to convert between byte[] and byte[][]. E.g. in 
INODE_COMPARATOR, getParent() converts it to byte[] and then the 
DiffReportListingEntry constructor converts it back to byte[][].

- In ChildrenDiff, the constructor guarantees that createdList is never null, 
so we should not check it; see the sketch after the sub-items below.
{code}
+public List getCreatedList() {
+  if (createdList == null) {
+return Collections.emptyList();
+  } else {
+return createdList;
+  }
+}
{code}
-* Similarly for getDeletedList().
-* addCreatedList and addDeletedList are not used.  Please remove them.
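
A minimal sketch of the suggested simplification (the element type is assumed 
to be {{INode}}, which is not confirmed by the snippet above):
{code:java}
// The constructor guarantees createdList is non-null, so return it directly:
public List<INode> getCreatedList() {
  return createdList;
}
{code}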

- The explicit type argument in {{Collections.<T> emptyList()}} can be 
dropped, i.e. use {{Collections.emptyList()}}.
-* Similarly, change {{new HashMap()}} to {{new HashMap<>()}}

- In SnapshotDiffReportListing, getTotalEntries(), getModifyListSize(), 
getCreateListSize() and getDeleteListSize() are not used. Please remove them.

- Do not call clone().  It is expensive.  E.g. why first clone the startPath 
and then convert it to String?
{code}
  startPath = DFSUtilClient.bytes2String(report.getStartPath());
{code}

- Wrong javadoc?
{code}
+  /**
+   * store the starting path to process across RPC's for snapshot diff.
+   */
+  private final boolean isFromEarlier;
{code}

- Use long instead of Long, boolean instead of Boolean, etc.


> SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC 
> response limit
> ---
>
> Key: HDFS-12594
> URL: https://issues.apache.org/jira/browse/HDFS-12594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12594.001.patch, HDFS-12594.002.patch, 
> HDFS-12594.003.patch, HDFS-12594.004.patch, HDFS-12594.005.patch, 
> SnapshotDiff_Improvemnets .pdf
>
>
> The snapshotDiff command fails if the snapshotDiff report size is larger than 
> the configured value of ipc.maximum.response.length, which is 128 MB by 
> default. 
> Worst case, with all rename ops in snapshots, each with source and target 
> name equal to MAX_PATH_LEN (8k characters), the 128 MB limit would be reached 
> with only about 8192 renames (128 MB / 16 KB per rename).
>  
> SnapshotDiff is currently used by distcp to optimize copy operations, and if 
> the diff report exceeds the limit, it fails with the below exception:
> Test set: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 112.095 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
> testDiffReportWithMillionFiles(org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport)
>   Time elapsed: 111.906 sec  <<< ERROR!
> java.io.IOException: Failed on local exception: 
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length; 
> Host Details : local host is: "hw15685.local/10.200.5.230"; destination host 
> is: "localhost":59808;
> Attached is the proposal for the changes required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Attachment: HDFS-12776-HDFS-9806.001.patch

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Status: Open  (was: Patch Available)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Status: Patch Available  (was: Open)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Attachment: (was: HDFS-12776-HDFS-9806.001.patch)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does 
> not have a PROVIDED volume. In a cluster where all Datanodes have PROVIDED 
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol

2017-11-08 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12549:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~nandakumar131] for the contribution. I've committed the patch to the 
feature branch. 

> Ozone: OzoneClient: Support for REST protocol
> -
>
> Key: HDFS-12549
> URL: https://issues.apache.org/jira/browse/HDFS-12549
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12549-HDFS-7240.000.patch, 
> HDFS-12549-HDFS-7240.001.patch, HDFS-12549-HDFS-7240.002.patch, 
> HDFS-12549-HDFS-7240.003.patch, HDFS-12549-HDFS-7240.004.patch
>
>
> Support for REST protocol in OzoneClient. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12735) Make ContainerStateMachine#applyTransaction async

2017-11-08 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244940#comment-16244940
 ] 

Tsz Wo Nicholas Sze commented on HDFS-12735:


It needs a 
[TaskQueue|https://github.com/apache/incubator-ratis/blob/master/ratis-common/src/main/java/org/apache/ratis/util/TaskQueue.java]
 per object.  I actually created it for the Ozone use case.
{code}
/**
 * A queue with execution order guarantee such that
 * each task is submitted for execution only if it becomes the head of the 
queue.
 * Tasks are executed sequentially without any overlap.
 *
 * By the definition of a queue, a task can become the head iff
 * (1) the queue is empty when offering it, or
 * (2) it is the next to the head and the head is polled out from the queue.
 *
 * A typical use case is to submit concurrent tasks
 * with an in-order guarantee for some of the tasks.
 *
 * One example use case is to submit tasks to write multiple files:
 * - A file may require multiple write tasks.
 * - Multiple files are written at the same time.
 * A solution is to create a {@link TaskQueue} for each file
 * and then submit the write tasks to the corresponding queue.
 * The files will be written concurrently and the writes to each file are 
in-order.
 */
{code}
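
A self-contained sketch of the same in-order-per-object idea using plain JDK 
futures (this stands in for, and does not reproduce, the actual TaskQueue API):
{code:java}
import java.util.Map;
import java.util.concurrent.*;

// Tasks submitted under the same key run in submission order; tasks under
// different keys run concurrently. A real implementation would also prune
// completed tails to keep the map from growing without bound.
class PerKeyExecutor {
  private final Map<String, CompletableFuture<Void>> tails =
      new ConcurrentHashMap<>();
  private final ExecutorService pool = Executors.newFixedThreadPool(8);

  CompletableFuture<Void> submit(String key, Runnable task) {
    return tails.compute(key, (k, tail) ->
        (tail == null ? CompletableFuture.<Void>completedFuture(null) : tail)
            .thenRunAsync(task, pool));
  }
}
{code}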


> Make ContainerStateMachine#applyTransaction async
> -
>
> Key: HDFS-12735
> URL: https://issues.apache.org/jira/browse/HDFS-12735
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>  Labels: performance
> Attachments: HDFS-12735-HDFS-7240.000.patch, 
> HDFS-12735-HDFS-7240.001.patch
>
>
> Currently ContainerStateMachine#applyTransaction makes a synchronous call to 
> dispatch client requests. Idea is to have a thread pool which dispatches 
> client requests and returns a CompletableFuture.
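> 
> A hedged sketch of that idea (the request/response types and the dispatcher 
> are stand-ins, not the exact Ozone signatures):
> {code:java}
> import java.util.concurrent.*;
> import java.util.function.Function;
> 
> class AsyncDispatchSketch {
>   private final ExecutorService pool = Executors.newFixedThreadPool(4);
>   // stand-in for the ContainerDispatcher
>   private final Function<byte[], byte[]> dispatcher;
> 
>   AsyncDispatchSketch(Function<byte[], byte[]> dispatcher) {
>     this.dispatcher = dispatcher;
>   }
> 
>   // dispatch on the pool and return a future instead of blocking the caller
>   CompletableFuture<byte[]> applyTransaction(byte[] request) {
>     return CompletableFuture.supplyAsync(() -> dispatcher.apply(request), pool);
>   }
> }
> {code}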



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol

2017-11-08 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244932#comment-16244932
 ] 

Xiaoyu Yao commented on HDFS-12549:
---

Thanks [~nandakumar131] for the update. Patch looks good to me, +1. I will 
commit it shortly.

> Ozone: OzoneClient: Support for REST protocol
> -
>
> Key: HDFS-12549
> URL: https://issues.apache.org/jira/browse/HDFS-12549
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12549-HDFS-7240.000.patch, 
> HDFS-12549-HDFS-7240.001.patch, HDFS-12549-HDFS-7240.002.patch, 
> HDFS-12549-HDFS-7240.003.patch, HDFS-12549-HDFS-7240.004.patch
>
>
> Support for REST protocol in OzoneClient. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12640) libhdfs++: automatic CI tests are getting stuck in test_libhdfs_mini_stress_hdfspp_test_shim_static

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244900#comment-16244900
 ] 

Hadoop QA commented on HDFS-12640:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
31s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
37s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}324m 53s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}407m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed CTEST tests | memcheck_hdfs_config_connect_bugs |
|   | test_hdfs_ext_hdfspp_test_shim_static |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:3117e2a |
| JIRA Issue | HDFS-12640 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896677/HDFS-12640.HDFS-8707.000.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 4299e5087d71 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-8707 / 9d35dff |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_151 |
| CTEST | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22006/artifact/out/patch-hadoop-hdfs-project_hadoop-hdfs-native-client-ctest.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22006/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22006/testReport/ |
| Max. process+thread count | 254 (vs. ulimit of 5000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22006/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: automatic CI tests are getting stuck in 
> test_libhdfs_mini_stress_hdfspp_test_shim_static
> ---
>
> Key: HDFS-12640
> URL: https://issues.apache.org/jira/browse/HDFS-12640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12640.HDFS-8707.000.patch
>
>
> All of the automated tests seem to get stuck, or at least stop generating 
> useful output, in 

[jira] [Commented] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244890#comment-16244890
 ] 

Hadoop QA commented on HDFS-12549:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
25s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
4 unchanged - 0 fixed = 5 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
30s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}148m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}227m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:2 |
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.scm.TestSCMCli |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d11161b |
| JIRA Issue | 

[jira] [Commented] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244831#comment-16244831
 ] 

Hadoop QA commented on HDFS-12705:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 36s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
22s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:8 |
| Failed junit tests | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|   | hadoop.hdfs.TestDecommissionWithStriped |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 |
|   | hadoop.hdfs.TestHDFSFileSystemContract |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDatanodeRegistration |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.TestDistributedFileSystem |
|   | hadoop.hdfs.TestDFSStripedInputStream |
|   | hadoop.hdfs.TestWriteReadStripedFile |
|   | hadoop.hdfs.TestSmallBlock |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce 

[jira] [Commented] (HDFS-12792) RBF: Test Router-based federation using HDFSContract

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244799#comment-16244799
 ] 

Hadoop QA commented on HDFS-12792:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.fs.contract.router.TestRouterHDFSContractRootDirectory |
|   | hadoop.fs.contract.router.TestRouterHDFSContractAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12792 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896710/HDFS-12615.000.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 150ccca3bf46 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cb35a59 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22013/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22013/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-12754) Lease renewal can hit a deadlock

2017-11-08 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-12754:
---
Attachment: HDFS-12754.004.patch

v4 patch that changes how the test sets the grace period. This way we don't 
have to set the grace period on every possible new LeaseRenewer object 
returned by getLeaseRenewer.

> Lease renewal can hit a deadlock 
> -
>
> Key: HDFS-12754
> URL: https://issues.apache.org/jira/browse/HDFS-12754
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: HDFS-12754.001.patch, HDFS-12754.002.patch, 
> HDFS-12754.003.patch, HDFS-12754.004.patch
>
>
> The client and the renewer can hit a deadlock during the close operation, 
> since closeFile() reaches back to DFSClient#removeFileBeingWritten. This is 
> possible if the client calls close while the renewer is renewing a lease.
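> 
> A minimal, runnable sketch of the two-lock cycle being described (class and 
> method names mirror the report; the bodies are illustrative, not HDFS source):
> {code:java}
> public class LeaseDeadlockSketch {
>   static final Object dfsClientLock = new Object();
>   static final Object leaseRenewerLock = new Object();
> 
>   public static void main(String[] args) {
>     Thread client = new Thread(() -> {
>       synchronized (dfsClientLock) {         // client closing a file
>         sleep(100);
>         synchronized (leaseRenewerLock) { }  // needs the renewer's lock
>       }
>     });
>     Thread renewer = new Thread(() -> {
>       synchronized (leaseRenewerLock) {      // renewer renewing leases
>         sleep(100);
>         synchronized (dfsClientLock) { }     // removeFileBeingWritten
>       }
>     });
>     client.start();
>     renewer.start();                         // typically hangs: deadlock
>   }
> 
>   static void sleep(long ms) {
>     try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
>   }
> }
> {code}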



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12758:
--
Attachment: (was: HDFS-12758-HDFS-7204.00.patch)

> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12758-HDFS-7240.00.patch
>
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}
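> A tiny illustration of the convention being enforced (the getter here is 
> hypothetical):
> {code:java}
> assertEquals(100, volume.getQuota());  // correct: expected first, actual second
> assertEquals(volume.getQuota(), 100);  // swapped: failure message is misleading
> {code}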



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12758:
--
Attachment: HDFS-12758-HDFS-7240.00.patch

> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12758-HDFS-7204.00.patch, 
> HDFS-12758-HDFS-7240.00.patch
>
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244785#comment-16244785
 ] 

Hadoop QA commented on HDFS-12512:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 115 new + 119 unchanged - 1 fixed = 234 total (was 120) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
27s{color} | {color:red} The patch generated 27 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:9 |
| Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat |
|   | hadoop.hdfs.TestTrashWithSecureEncryptionZones |
|   | hadoop.hdfs.TestEncryptionZonesWithKMS |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractRename |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestDecommission |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030 |
|   | hadoop.hdfs.TestDFSStripedOutputStream |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractDelete |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractRootDirectory |
|   | hadoop.hdfs.TestMaintenanceState |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
\\
\\
|| 

[jira] [Commented] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244778#comment-16244778
 ] 

Hadoop QA commented on HDFS-12758:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-12758 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12758 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896729/HDFS-12758-HDFS-7204.00.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22017/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12758-HDFS-7204.00.patch
>
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244774#comment-16244774
 ] 

Hadoop QA commented on HDFS-12740:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 17m  
9s{color} | {color:red} root in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-hdfs-project in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdfs-client in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 45s{color} 
| {color:red} hadoop-hdfs-project generated 429 new + 0 unchanged - 0 fixed = 
429 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 148 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch has 1800 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
56s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue}  0m 
22s{color} | {color:blue} ASF License check generated no output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Updated] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12758:
--
Attachment: HDFS-12758-HDFS-7204.00.patch

> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12758-HDFS-7204.00.patch
>
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12758:
--
Status: Patch Available  (was: In Progress)

> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12758-HDFS-7204.00.patch
>
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12758) Ozone: Correcting assertEquals argument order in test cases

2017-11-08 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12758 started by Bharat Viswanadham.
-
> Ozone: Correcting assertEquals argument order in test cases
> ---
>
> Key: HDFS-12758
> URL: https://issues.apache.org/jira/browse/HDFS-12758
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
>
> In a few test cases, the arguments to {{Assert.assertEquals}} are swapped. 
> Below is the list of classes and test cases where this has to be corrected.
> {noformat}
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/ksm/TestKeySpaceManager.java
>  testChangeVolumeQuota - line: 187, 197 & 204
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/TestDistributedOzoneVolumes.java
>  testCreateVolumes - line: 91
>  testCreateVolumesWithQuota - line: 103
>  testCreateVolumesWithInvalidQuota - line: 115
>  testCreateVolumesWithInvalidUser - line: 129
>  testCreateVolumesWithOutAdminRights - line: 144
>  testCreateVolumesInLoop - line: 156
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
>  runTestPutKey - line: 239 & 246
>  runTestPutAndListKey - line: 428, 429, 451, 452, 458 & 459
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/transport/server/TestContainerServer.java
>  testClientServerWithContainerDispatcher - line: 219
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/ContainerTestHelper.java
>  verifyGetKey - line: 491
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
>  testUpdateContainer - line: 776, 778, 794, 796, 821 & 823
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/common/TestEndPoint.java
>  testGetVersion - line: 122 & 124
>  testRegister - line: 215
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/container/replication/TestContainerReplicationManager.java
>  testDetectSingleContainerReplica - line: 168
> hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
>  testCaching - line: 82, 91, 96 & 97
>  testFreeByReference - line: 120, 130 & 137
>  testFreeByEviction - line: 165, 170, 177 & 185
> hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/ozone/TestOzoneAcls.java
>  testAclValues - line: 111, 112, 113, 116, 117, 118, 121, 122, 123, 126, 127, 
> 128, 131, 132, 133, 136, 137 & 138
> hadoop-tools/hadoop-ozone/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFileInterfaces.java
>  testFileSystemInit - line: 102
>  testOzFsReadWrite - line: 123
>  testDirectory - line: 135, 138 & 139
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12781) After Datanode down, In Namenode UI Datanode tab is throwing warning message.

2017-11-08 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244743#comment-16244743
 ] 

Ravi Prakash commented on HDFS-12781:
-

Hi Brahma! Is the JMX JSON returned from the URL 
"jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo" malformed? What fields are 
missing? It seems the {{workaround()}} function is trying to get the LiveNodes 
and return them as an array that DataTables.js can consume. So if the node has 
been dead long enough, it should *not* be in the LiveNodes list. If it is only 
recently dead, which column of data doesn't exist?

> After Datanode down, In Namenode UI Datanode tab is throwing warning message.
> -
>
> Key: HDFS-12781
> URL: https://issues.apache.org/jira/browse/HDFS-12781
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Harshakiran Reddy
>Assignee: Brahma Reddy Battula
> Attachments: HDFS-12781-001.patch
>
>
> Scenario:
> Stop one Datanode
> Refresh or click on the Datanode tab in namenode UI.
> Actual Output:
> ==
> It throws a warning message; please find the warning message below.
> DataTables warning: table id=table-datanodes - Requested unknown parameter 
> '7' for row 2. For more information about this error, please see 
> http://datatables.net/tn/4
> Expected Output:
> 
> Whenever you click on the Datanode tab, it should display the datanode 
> information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-8198) Erasure Coding: system test of TeraSort

2017-11-08 Thread Daniel Pol (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16238998#comment-16238998
 ] 

Daniel Pol edited comment on HDFS-8198 at 11/8/17 9:14 PM:
---

Terasort doesn't seem to work on my system with EC in beta1. Here's a small 
script to reproduce the issue:

sudo -u hdfs bin/hdfs dfs -rm -r -skipTrash /ectest
sudo -u hdfs bin/hdfs dfs -mkdir /ectest
#sudo -u hdfs bin/hdfs ec -setPolicy -path /ectest -policy RS-3-2-1024k
sleep 5
sudo -u hdfs bin/yarn jar \
  /ec/hadoop-3.0.0-beta1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-beta1.jar \
  teragen 1 /ectest/Input
sleep 30
sudo -u hdfs bin/yarn jar \
  /ec/hadoop-3.0.0-beta1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-beta1.jar \
  teravalidate /ectest/Input /ectest/Validate
sleep 30
sudo -u hdfs bin/yarn jar \
  /ec/hadoop-3.0.0-beta1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-beta1.jar \
  terasort /ectest/Input /ectest/Output

It works fine like this (with the set-policy line commented out) but fails when 
you uncomment that line. Interestingly enough, it fails only at the Terasort 
step when reading the input files, yet Teravalidate, which runs before it and 
reads the same files, doesn't fail. Fsck shows everything fine, and checking 
the nodes individually, all the files are there. I've tried all the default 
codecs and policies (native and Java); they all give me the same error: 
missing blocks. The error shows up only when the amount of data becomes big 
enough, so make sure you use the number of records I have in my script or 
higher.

It seems to happen only when the input split size goes over 1850 MB; it is 
very clear at 2 GB or more.


was (Author: danielpol):
Terasort doesn't seem to work on my system with EC in beta1. Here's a small 
script to reproduce the issue:

sudo -u hdfs bin/hdfs dfs -rm -r -skipTrash /ectest
sudo -u hdfs bin/hdfs dfs -mkdir /ectest
#sudo -u hdfs bin/hdfs ec -setPolicy -path /ectest -policy RS-3-2-1024k
sleep 5
sudo -u hdfs bin/yarn jar \
  /ec/hadoop-3.0.0-beta1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-beta1.jar \
  teragen 1 /ectest/Input
sleep 30
sudo -u hdfs bin/yarn jar \
  /ec/hadoop-3.0.0-beta1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-beta1.jar \
  teravalidate /ectest/Input /ectest/Validate
sleep 30
sudo -u hdfs bin/yarn jar \
  /ec/hadoop-3.0.0-beta1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-beta1.jar \
  terasort /ectest/Input /ectest/Output

It works fine like this (with the set-policy line commented out) but fails when 
you uncomment that line. Interestingly enough, it fails only at the Terasort 
step when reading the input files, yet Teravalidate, which runs before it and 
reads the same files, doesn't fail. Fsck shows everything fine, and checking 
the nodes individually, all the files are there. I've tried all the default 
codecs and policies (native and Java); they all give me the same error: 
missing blocks. The error shows up only when the amount of data becomes big 
enough, so make sure you use the number of records I have in my script or 
higher.


> Erasure Coding: system test of TeraSort
> ---
>
> Key: HDFS-8198
> URL: https://issues.apache.org/jira/browse/HDFS-8198
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>
> Functional system test of TeraSort on EC files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12615) Router-based HDFS federation phase 2

2017-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12615:
---
Description: This umbrella JIRA tracks a set of improvements over the 
Router-based HDFS federation (HDFS-10467).  (was: This umbrella JIRA tracks set 
of improvements over the Router-based HDFS federatio (HDFS-10467).)

> Router-based HDFS federation phase 2
> 
>
> Key: HDFS-12615
> URL: https://issues.apache.org/jira/browse/HDFS-12615
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
>
> This umbrella JIRA tracks a set of improvements over the Router-based HDFS 
> federation (HDFS-10467).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-08 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12740:
---
Attachment: HDFS-12740-HDFS-7240.001.patch

Removed the older patch. Patch v1 implements the changes as per HDFS-12739.
[~nandakumar131], please have a look.

> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12740-HDFS-7240.001.patch
>
>
> When the ozone cluster is first created, the SCM --init command will generate 
> a cluster id as well as an SCM id and persist them locally. The same cluster 
> id and SCM id will be shared with KSM during KSM initialization and with 
> Datanodes during datanode registration.
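> A minimal sketch of the kind of RPC this could be (the interface and type 
> names below are assumptions for illustration, not the actual HDFS-7240 API):
> {code:java}
> import java.io.IOException;
> 
> /** Hypothetical protocol the SCM could expose to KSM and DataNodes. */
> interface ScmClusterIdProtocol {
>   /** Returns the cluster id and SCM id persisted by "scm --init". */
>   ClusterIdentity getClusterIdentity() throws IOException;
> }
> 
> /** Hypothetical immutable carrier for the two ids. */
> final class ClusterIdentity {
>   private final String clusterId;
>   private final String scmId;
>   ClusterIdentity(String clusterId, String scmId) {
>     this.clusterId = clusterId;
>     this.scmId = scmId;
>   }
>   String getClusterId() { return clusterId; }
>   String getScmId() { return scmId; }
> }
> {code}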



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-08 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12740:
---
Status: Patch Available  (was: Open)

> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
> Attachments: HDFS-12740-HDFS-7240.001.patch
>
>
> When the ozone cluster is first created, the SCM --init command will generate 
> a cluster id as well as an SCM id and persist them locally. The same cluster 
> id and SCM id will be shared with KSM during KSM initialization and with 
> Datanodes during datanode registration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12740) SCM should support a RPC to share the cluster Id with KSM and DataNodes

2017-11-08 Thread Shashikant Banerjee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-12740:
---
Attachment: (was: HDFS-12740-HDFS-7240.001.patch)

> SCM should support a RPC to share the cluster Id with KSM and DataNodes
> ---
>
> Key: HDFS-12740
> URL: https://issues.apache.org/jira/browse/HDFS-12740
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Fix For: HDFS-7240
>
>
> When the ozone cluster is first created, the SCM --init command will generate 
> a cluster id as well as an SCM id and persist them locally. The same cluster 
> id and SCM id will be shared with KSM during KSM initialization and with 
> Datanodes during datanode registration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244674#comment-16244674
 ] 

Hadoop QA commented on HDFS-12756:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 45 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
6s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 14s{color} | {color:orange} root: The patch generated 2 new + 7 unchanged - 
0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
42s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
34s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:4 |
| Failed junit tests | hadoop.hdfs.TestDatanodeDeath |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|   | 

[jira] [Commented] (HDFS-12754) Lease renewal can hit a deadlock

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244648#comment-16244648
 ] 

Hadoop QA commented on HDFS-12754:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 50s{color} 
| {color:red} hadoop-hdfs-project generated 1 new + 426 unchanged - 0 fixed = 
427 total (was 426) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.metrics.TestFederationMetrics |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.TestErasureCodingMultipleRacks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12754 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896683/HDFS-12754.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d8048b9bd11 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception

2017-11-08 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244613#comment-16244613
 ] 

Hanisha Koneru commented on HDFS-12705:
---

Thanks for the review, [~nandakumar131].
Addressed your comments in patch v03.
Test failures are unrelated and pass locally for me.

> WebHdfsFileSystem exceptions should retain the caused by exception
> --
>
> Key: HDFS-12705
> URL: https://issues.apache.org/jira/browse/HDFS-12705
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Hanisha Koneru
> Attachments: HDFS-12705.001.patch, HDFS-12705.002.patch, 
> HDFS-12705.003.patch
>
>
> {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host 
> to the exception. While it preserves the original stacktrace, it omits the 
> original cause, which complicates debugging.
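> A minimal sketch of the fix idea (the helper name and fallback below are 
> illustrative, not the actual {{runWithRetry}} code): when re-creating the 
> exception reflectively, attach the original as the cause so it survives:
> {code:java}
> import java.io.IOException;
> 
> // Re-create the same exception type with the remote host prepended,
> // keeping the original exception as the cause for debugging.
> static IOException prependHost(String host, IOException ioe) {
>   try {
>     IOException wrapped = ioe.getClass()
>         .getConstructor(String.class)
>         .newInstance(host + ": " + ioe.getMessage());
>     wrapped.initCause(ioe);                    // the step that is missing today
>     wrapped.setStackTrace(ioe.getStackTrace());
>     return wrapped;
>   } catch (ReflectiveOperationException e) {
>     return ioe;   // fall back to the original if reflection fails
>   }
> }
> {code}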



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12705) WebHdfsFileSystem exceptions should retain the caused by exception

2017-11-08 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12705:
--
Attachment: HDFS-12705.003.patch

> WebHdfsFileSystem exceptions should retain the caused by exception
> --
>
> Key: HDFS-12705
> URL: https://issues.apache.org/jira/browse/HDFS-12705
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Hanisha Koneru
> Attachments: HDFS-12705.001.patch, HDFS-12705.002.patch, 
> HDFS-12705.003.patch
>
>
> {{WebHdfsFileSystem#runWithRetry}} uses reflection to prepend the remote host 
> to the exception. While it preserves the original stacktrace, it omits the 
> original cause, which complicates debugging.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12735) Make ContainerStateMachine#applyTransaction async

2017-11-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244617#comment-16244617
 ] 

Jitendra Nath Pandey commented on HDFS-12735:
-

bq. The ratis StateMachine#applyTransaction does not guarantee that the calls 
will be ordered according to the log commit order and this function is the one 
which is implemented in this patch(Source: StateMachine.java file in ratis)

It is a little more complicated than I initially thought. Even though 
operations like create and delete will be ordered by the client, the client 
would like to write a single large object to a pipeline asynchronously. Since 
ozone objects are append-only, the container state machine will need to order 
the writes for each individual object; writes to different objects can proceed 
in parallel.

[~szetszwo] has provided a sophisticated task queue in RATIS-122 that tries to 
achieve something similar. The additional complexity is that a task queue is 
needed per object being written in the state machine.
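A minimal sketch of that shape of ordering (the class and method names below 
are illustrative, not from RATIS-122): keep one future chain per object key, 
so appends to the same object serialize while different objects run in 
parallel:
{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PerObjectOrderer {
  private final ExecutorService pool = Executors.newFixedThreadPool(8);
  // Tail of the in-flight chain for each object; entries are never removed
  // here, which a real implementation would have to address.
  private final ConcurrentHashMap<String, CompletableFuture<Void>> tails =
      new ConcurrentHashMap<>();

  /** Writes to the same objectKey run in submission order; different keys run in parallel. */
  CompletableFuture<Void> submit(String objectKey, Runnable write) {
    return tails.compute(objectKey, (key, tail) ->
        (tail == null ? CompletableFuture.<Void>completedFuture(null) : tail)
            .thenRunAsync(write, pool));
  }
}
{code}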

> Make ContainerStateMachine#applyTransaction async
> -
>
> Key: HDFS-12735
> URL: https://issues.apache.org/jira/browse/HDFS-12735
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>  Labels: performance
> Attachments: HDFS-12735-HDFS-7240.000.patch, 
> HDFS-12735-HDFS-7240.001.patch
>
>
> Currently, ContainerStateMachine#applyTransaction makes a synchronous call to 
> dispatch client requests. The idea is to have a thread pool that dispatches 
> client requests and returns a CompletableFuture.
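> A minimal sketch of the idea (the String request/response below are stand-ins 
> for the real container protocol types, not the patch itself):
> {code:java}
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> 
> class AsyncDispatcher {
>   private final ExecutorService pool = Executors.newFixedThreadPool(16);
> 
>   // Stand-in for the existing synchronous dispatch of a client request.
>   private String dispatch(String request) { return "response to " + request; }
> 
>   // Instead of blocking the caller, hand the dispatch to the pool and
>   // return a CompletableFuture that completes with the response.
>   CompletableFuture<String> applyTransactionAsync(String request) {
>     return CompletableFuture.supplyAsync(() -> dispatch(request), pool);
>   }
> }
> {code}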



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12594) SnapshotDiff - snapshotDiff fails if the snapshotDiff report exceeds the RPC response limit

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244611#comment-16244611
 ] 

Hadoop QA commented on HDFS-12594:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 53s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
906 unchanged - 0 fixed = 910 total (was 906) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-hdfs:3 |
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDNFailure |
|   | hadoop.hdfs.server.datanode.TestBpServiceActorScheduler |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestFSOutputSummer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|  

[jira] [Updated] (HDFS-12739) Add Support for SCM --init command

2017-11-08 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12739:
---
Component/s: (was: HDFS-7240)
 ozone

> Add Support for SCM --init command
> --
>
> Key: HDFS-12739
> URL: https://issues.apache.org/jira/browse/HDFS-12739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12739-HDFS-7240.001.patch, 
> HDFS-12739-HDFS-7240.002.patch, HDFS-12739-HDFS-7240.003.patch, 
> HDFS-12739-HDFS-7240.004.patch, HDFS-12739-HDFS-7240.005.patch, 
> HDFS-12739-HDFS-7240.006.patch, HDFS-12739-HDFS-7240.007.patch, 
> HDFS-12739-HDFS-7240.008.patch, HDFS-12739-HDFS-7240.009.patch
>
>
> The SCM --init command will generate a cluster ID and persist it locally. The
> same cluster ID will be shared with KSM and the datanodes. If the cluster ID
> is already available in the local version file, SCM will just read it.
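
A minimal sketch of the init flow described above; the class name, VERSION file path, and file layout here are illustrative stand-ins, not the actual patch code.

{code:java}
// Hypothetical sketch: generate and persist a cluster ID on first init,
// or reuse the ID already recorded in the local VERSION file.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

public final class ScmInitSketch {

  static String initClusterId(Path versionFile) throws IOException {
    if (Files.exists(versionFile)) {
      // Already initialized: just read the persisted cluster ID.
      return new String(Files.readAllBytes(versionFile),
          StandardCharsets.UTF_8).trim();
    }
    // First init: generate a new cluster ID and persist it locally.
    String clusterId = UUID.randomUUID().toString();
    Files.createDirectories(versionFile.getParent());
    Files.write(versionFile, clusterId.getBytes(StandardCharsets.UTF_8));
    return clusterId;
  }

  public static void main(String[] args) throws IOException {
    String id = initClusterId(Paths.get("/tmp/scm/current/VERSION"));
    // The same ID would later be shared with KSM and the datanodes.
    System.out.println("clusterID=" + id);
  }
}
{code}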



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12739) Add Support for SCM --init command

2017-11-08 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12739:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

> Add Support for SCM --init command
> --
>
> Key: HDFS-12739
> URL: https://issues.apache.org/jira/browse/HDFS-12739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12739-HDFS-7240.001.patch, 
> HDFS-12739-HDFS-7240.002.patch, HDFS-12739-HDFS-7240.003.patch, 
> HDFS-12739-HDFS-7240.004.patch, HDFS-12739-HDFS-7240.005.patch, 
> HDFS-12739-HDFS-7240.006.patch, HDFS-12739-HDFS-7240.007.patch, 
> HDFS-12739-HDFS-7240.008.patch, HDFS-12739-HDFS-7240.009.patch
>
>
> The SCM --init command will generate a cluster ID and persist it locally. The
> same cluster ID will be shared with KSM and the datanodes. If the cluster ID
> is already available in the local version file, SCM will just read it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12739) Add Support for SCM --init command

2017-11-08 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244600#comment-16244600
 ] 

Nanda kumar commented on HDFS-12739:


I have committed this to the feature branch.
Thanks for the contribution [~shashikant] and thanks [~msingh] for the review.

> Add Support for SCM --init command
> --
>
> Key: HDFS-12739
> URL: https://issues.apache.org/jira/browse/HDFS-12739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12739-HDFS-7240.001.patch, 
> HDFS-12739-HDFS-7240.002.patch, HDFS-12739-HDFS-7240.003.patch, 
> HDFS-12739-HDFS-7240.004.patch, HDFS-12739-HDFS-7240.005.patch, 
> HDFS-12739-HDFS-7240.006.patch, HDFS-12739-HDFS-7240.007.patch, 
> HDFS-12739-HDFS-7240.008.patch, HDFS-12739-HDFS-7240.009.patch
>
>
> The SCM --init command will generate a cluster ID and persist it locally. The
> same cluster ID will be shared with KSM and the datanodes. If the cluster ID
> is already available in the local version file, SCM will just read it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12739) Add Support for SCM --init command

2017-11-08 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244570#comment-16244570
 ] 

Nanda kumar edited comment on HDFS-12739 at 11/8/17 7:19 PM:
-

+1 on patch v009, looks good to me. I will commit this shortly.


was (Author: nandakumar131):
+1, looks good to me. I will commit this shortly.

> Add Support for SCM --init command
> --
>
> Key: HDFS-12739
> URL: https://issues.apache.org/jira/browse/HDFS-12739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12739-HDFS-7240.001.patch, 
> HDFS-12739-HDFS-7240.002.patch, HDFS-12739-HDFS-7240.003.patch, 
> HDFS-12739-HDFS-7240.004.patch, HDFS-12739-HDFS-7240.005.patch, 
> HDFS-12739-HDFS-7240.006.patch, HDFS-12739-HDFS-7240.007.patch, 
> HDFS-12739-HDFS-7240.008.patch, HDFS-12739-HDFS-7240.009.patch
>
>
> The SCM --init command will generate a cluster ID and persist it locally. The
> same cluster ID will be shared with KSM and the datanodes. If the cluster ID
> is already available in the local version file, SCM will just read it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12739) Add Support for SCM --init command

2017-11-08 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244570#comment-16244570
 ] 

Nanda kumar edited comment on HDFS-12739 at 11/8/17 7:18 PM:
-

+1, looks good to me. I will commit this shortly.


was (Author: nandakumar131):
+1, looks good to me.
I will commit this shortly and will take care of the checkstyle issue while
committing.

> Add Support for SCM --init command
> --
>
> Key: HDFS-12739
> URL: https://issues.apache.org/jira/browse/HDFS-12739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12739-HDFS-7240.001.patch, 
> HDFS-12739-HDFS-7240.002.patch, HDFS-12739-HDFS-7240.003.patch, 
> HDFS-12739-HDFS-7240.004.patch, HDFS-12739-HDFS-7240.005.patch, 
> HDFS-12739-HDFS-7240.006.patch, HDFS-12739-HDFS-7240.007.patch, 
> HDFS-12739-HDFS-7240.008.patch, HDFS-12739-HDFS-7240.009.patch
>
>
> The SCM --init command will generate a cluster ID and persist it locally. The
> same cluster ID will be shared with KSM and the datanodes. If the cluster ID
> is already available in the local version file, SCM will just read it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12792) RBF: Test Router-based federation using HDFSContract

2017-11-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244573#comment-16244573
 ] 

Íñigo Goiri commented on HDFS-12792:


While open-sourcing our WebHDFS support for RBF in HDFS-12512, I realized we
internally had tests for the HDFSContract that weren't in OSS.
Pushing them here.

> RBF: Test Router-based federation using HDFSContract
> 
>
> Key: HDFS-12792
> URL: https://issues.apache.org/jira/browse/HDFS-12792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12615.000.patch
>
>
> Router-based federation should support HDFSContract.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12739) Add Support for SCM --init command

2017-11-08 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244570#comment-16244570
 ] 

Nanda kumar commented on HDFS-12739:


+1, looks good to me.
I will commit this shortly and will take care of the checkstyle issue while
committing.

> Add Support for SCM --init command
> --
>
> Key: HDFS-12739
> URL: https://issues.apache.org/jira/browse/HDFS-12739
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: HDFS-7240
>Affects Versions: HDFS-7240
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
> Attachments: HDFS-12739-HDFS-7240.001.patch, 
> HDFS-12739-HDFS-7240.002.patch, HDFS-12739-HDFS-7240.003.patch, 
> HDFS-12739-HDFS-7240.004.patch, HDFS-12739-HDFS-7240.005.patch, 
> HDFS-12739-HDFS-7240.006.patch, HDFS-12739-HDFS-7240.007.patch, 
> HDFS-12739-HDFS-7240.008.patch, HDFS-12739-HDFS-7240.009.patch
>
>
> The SCM --init command will generate a cluster ID and persist it locally. The
> same cluster ID will be shared with KSM and the datanodes. If the cluster ID
> is already available in the local version file, SCM will just read it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Status: Open  (was: Patch Available)

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch, 
> HDFS-12777-HDFS-9806.002.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data.
> Storing the data for these blocks can lead to a large memory footprint.
> Further, with so many blocks, the {{DirectoryScanner}} running on a PROVIDED
> volume can increase memory and CPU utilization.
> To reduce these overheads, this JIRA aims to (a) disable the
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses only on
> read-only data in PROVIDED volumes), and (b) reduce the space occupied by
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all
> PROVIDED blocks.
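
For approach (b), a hedged sketch of the memory saving: rather than every replica holding a full URI string, replicas share one base URI object and keep only a short relative suffix. The names here ({{ProvidedReplicaSketch}}, {{baseURI}}) are illustrative, not the actual {{FinalizedProvidedReplicaInfo}} fields.

{code:java}
import java.net.URI;

// Sketch: all PROVIDED replicas of a volume share one base URI object and
// store only a short relative path, instead of a full per-replica URI string.
final class ProvidedReplicaSketch {
  private final URI baseURI;         // shared across millions of replicas
  private final String relativePath; // small per-replica suffix

  ProvidedReplicaSketch(URI baseURI, String relativePath) {
    this.baseURI = baseURI;
    this.relativePath = relativePath;
  }

  // Reconstruct the full block location on demand rather than storing it.
  URI blockURI() {
    return baseURI.resolve(relativePath);
  }

  public static void main(String[] args) {
    URI base = URI.create("s3a://bucket/data/");
    ProvidedReplicaSketch r = new ProvidedReplicaSketch(base, "blk_1073741825");
    System.out.println(r.blockURI()); // s3a://bucket/data/blk_1073741825
  }
}
{code}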



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12512) RBF: Add WebHDFS

2017-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12512:
---
Status: Patch Available  (was: Open)

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>  Labels: RBF
> Attachments: HDFS-12512.000.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract

2017-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12792:
---
Description: Router-based federation should support HDFSContract.

> RBF: Test Router-based federation using HDFSContract
> 
>
> Key: HDFS-12792
> URL: https://issues.apache.org/jira/browse/HDFS-12792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12615.000.patch
>
>
> Router-based federation should support HDFSContract.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract

2017-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12792:
---
Attachment: HDFS-12615.000.patch

> RBF: Test Router-based federation using HDFSContract
> 
>
> Key: HDFS-12792
> URL: https://issues.apache.org/jira/browse/HDFS-12792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
> Attachments: HDFS-12615.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract

2017-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12792:
---
Assignee: Íñigo Goiri
  Status: Patch Available  (was: Open)

> RBF: Test Router-based federation using HDFSContract
> 
>
> Key: HDFS-12792
> URL: https://issues.apache.org/jira/browse/HDFS-12792
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Attachments: HDFS-12615.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12792) RBF: Test Router-based federation using HDFSContract

2017-11-08 Thread JIRA
Íñigo Goiri created HDFS-12792:
--

 Summary: RBF: Test Router-based federation using HDFSContract
 Key: HDFS-12792
 URL: https://issues.apache.org/jira/browse/HDFS-12792
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-08 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244564#comment-16244564
 ] 

Nanda kumar commented on HDFS-12756:


Thanks [~anu] for updating the patch.
LGTM, +1 pending jenkins.

> Ozone: Add datanodeID to heartbeat responses and container protocol
> ---
>
> Key: HDFS-12756
> URL: https://issues.apache.org/jira/browse/HDFS-12756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-12756-HDFS-7240.001.patch, 
> HDFS-12756-HDFS-7240.002.patch, HDFS-12756-HDFS-7240.003.patch, 
> HDFS-12756-HDFS-7240.004.patch
>
>
> If we have the datanode ID in heartbeat responses and in the commands sent to
> a datanode, we will be able to do additional sanity checking on the datanode
> before executing a command. This is also very helpful in creating a
> MiniOzoneCluster with 1000s of simulated nodes, which is needed for
> scale-based unit tests of SCM.
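
A hedged sketch of the sanity check this enables on the datanode side; {{CommandDispatchSketch}} and its method names are illustrative, not the actual heartbeat protocol classes.

{code:java}
// Sketch: before executing a command received in a heartbeat response,
// verify that it was addressed to this datanode's ID.
final class CommandDispatchSketch {
  private final String localDatanodeId;

  CommandDispatchSketch(String localDatanodeId) {
    this.localDatanodeId = localDatanodeId;
  }

  void dispatch(String targetDatanodeId, Runnable command) {
    if (!localDatanodeId.equals(targetDatanodeId)) {
      // Misrouted command: refuse to execute it.
      throw new IllegalStateException("Command for " + targetDatanodeId
          + " received by " + localDatanodeId);
    }
    command.run();
  }

  public static void main(String[] args) {
    CommandDispatchSketch dn = new CommandDispatchSketch("dn-42");
    dn.dispatch("dn-42", () -> System.out.println("command executed"));
  }
}
{code}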



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12512) RBF: Add WebHDFS

2017-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12512:
---
Attachment: HDFS-12512.000.patch

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>  Labels: RBF
> Attachments: HDFS-12512.000.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Attachment: HDFS-12777-HDFS-9806.002.patch

This patch fixes the unused-import error. [~elgoiri], can you take a look?

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch, 
> HDFS-12777-HDFS-9806.002.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data.
> Storing the data for these blocks can lead to a large memory footprint.
> Further, with so many blocks, the {{DirectoryScanner}} running on a PROVIDED
> volume can increase memory and CPU utilization.
> To reduce these overheads, this JIRA aims to (a) disable the
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses only on
> read-only data in PROVIDED volumes), and (b) reduce the space occupied by
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all
> PROVIDED blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12777) [READ] Reduce memory and CPU footprint for PROVIDED volumes.

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12777:
--
Status: Patch Available  (was: Open)

> [READ] Reduce memory and CPU footprint for PROVIDED volumes.
> 
>
> Key: HDFS-12777
> URL: https://issues.apache.org/jira/browse/HDFS-12777
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12777-HDFS-9806.001.patch, 
> HDFS-12777-HDFS-9806.002.patch
>
>
> As opposed to local blocks, each DN keeps track of all blocks in PROVIDED
> storage. This can be millions of blocks for 100s of TBs of PROVIDED data.
> Storing the data for these blocks can lead to a large memory footprint.
> Further, with so many blocks, the {{DirectoryScanner}} running on a PROVIDED
> volume can increase memory and CPU utilization.
> To reduce these overheads, this JIRA aims to (a) disable the
> {{DirectoryScanner}} on PROVIDED volumes (as HDFS-9806 focuses only on
> read-only data in PROVIDED volumes), and (b) reduce the space occupied by
> {{FinalizedProvidedReplicaInfo}} by using a common URI prefix across all
> PROVIDED blocks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12789:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch, 
> HDFS-12789-HDFS-9806.002.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}
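
The usual fix for this FindBugs pattern is to make sure the stream is closed on every path, including when the constructor fails partway through. A generic sketch under that assumption (not the actual {{ImageWriter}} code):

{code:java}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: close the partially-constructed stream if the constructor fails,
// which discharges the FindBugs obligation on every path.
class StreamOwnerSketch implements AutoCloseable {
  private final OutputStream out;

  StreamOwnerSketch(String path) throws IOException {
    OutputStream os = new FileOutputStream(path);
    try {
      os.write(new byte[] {'I', 'M', 'G', 1}); // e.g. a header; may throw
      this.out = os;
    } catch (IOException e) {
      os.close(); // failure path: obligation discharged here
      throw e;
    }
  }

  @Override
  public void close() throws IOException {
    out.close(); // normal path: obligation discharged by the owner
  }
}
{code}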



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11640:
--
Status: Open  (was: Patch Available)

> [READ] Datanodes should use a unique identifier when reading from external 
> stores
> -
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11640-HDFS-9806.001.patch, 
> HDFS-11640-HDFS-9806.002.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS

2017-11-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244550#comment-16244550
 ] 

Íñigo Goiri commented on HDFS-12512:


Added an approach that more or less works, but it's not very clean, as it
repeats a lot of code from the Namenode.
[~ywskycn], feel free to use [^HDFS-12512.000.patch] as a base or to start
from scratch.

> RBF: Add WebHDFS
> 
>
> Key: HDFS-12512
> URL: https://issues.apache.org/jira/browse/HDFS-12512
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Wei Yan
>  Labels: RBF
> Attachments: HDFS-12512.000.patch
>
>
> The Router currently does not support WebHDFS. It needs to implement 
> something similar to {{NamenodeWebHdfsMethods}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11640) [READ] Datanodes should use a unique identifier when reading from external stores

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-11640:
--
Status: Patch Available  (was: Open)

> [READ] Datanodes should use a unique identifier when reading from external 
> stores
> -
>
> Key: HDFS-11640
> URL: https://issues.apache.org/jira/browse/HDFS-11640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-11640-HDFS-9806.001.patch, 
> HDFS-11640-HDFS-9806.002.patch
>
>
> Use a unique identifier when reading from external stores to ensure that 
> datanodes read the correct (version of) file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Status: Patch Available  (was: Open)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does
> not have a PROVIDED volume. In a cluster where all datanodes have PROVIDED
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12779) [READ] Allow cluster id to be specified to the Image generation tool

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12779:
--
Status: Patch Available  (was: Open)

> [READ] Allow cluster id to be specified to the Image generation tool
> 
>
> Key: HDFS-12779
> URL: https://issues.apache.org/jira/browse/HDFS-12779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Trivial
> Attachments: HDFS-12779-HDFS-9806.001.patch
>
>
> Setting the cluster id for the FSImage generated for PROVIDED files is 
> required when the Namenode for PROVIDED files is expected to run in 
> federation with other Namenodes that manage local storage/data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12776) [READ] Increasing replication for PROVIDED files should create local replicas

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12776:
--
Status: Open  (was: Patch Available)

> [READ] Increasing replication for PROVIDED files should create local replicas
> -
>
> Key: HDFS-12776
> URL: https://issues.apache.org/jira/browse/HDFS-12776
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12776-HDFS-9806.001.patch
>
>
> For PROVIDED files, set replication only works when the target datanode does
> not have a PROVIDED volume. In a cluster where all datanodes have PROVIDED
> volumes, set replication does not work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol

2017-11-08 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244543#comment-16244543
 ] 

Nanda kumar commented on HDFS-12549:


[~xyao], review comments are addressed in patch v004.

bq. have you consider using the new JDK8 DateTimeFormatter/ZonedDateTime for 
the handling of ozone creation/modification timestamp?
Thanks for the suggestion, I have modified it accordingly.

bq. Please add some comments to it and the default values for 
OZONE_REST_CLIENT_HTTP_CONNECTION_MAX and maybe some document also in 
ozone-default.xml
Done

bq. there are other configurations of PoolingHttpClientConnectionManager that 
we might want to expose via OzoneConfigKeys in addition to the MaxTotal, e.g., 
max per route
Apart from MaxTotal, I also added DefaultMaxPerRoute; please let me know if
anything else needs to be exposed.

bq. we don't need to instantiate a new data formatter here. The hard coded 
format string can be replaced by OzoneConsts.OZONE_DATE_FORMAT.
Fixed

bq. should use the getShortUserName()
Fixed

bq. executeHttpRequest does not close the response, which causes leaking of the 
response stream.

Since in {{createKey}} and {{getKey}} the {{HttpEntity}} has to be closed only
when we close the stream, the responsibility of consuming the response
{{HttpEntity}} is given to the caller. I have updated the javadoc to make this
explicit to callers of {{executeHttpRequest}}.
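
A hedged sketch of that caller-side contract using the standard Apache HttpClient types; the URL and the surrounding wiring are illustrative, not the actual Ozone REST client.

{code:java}
import java.io.IOException;
import java.io.InputStream;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public final class RestGetKeySketch {
  public static void main(String[] args) throws IOException {
    String url = "http://localhost:9880/volume/bucket/key"; // illustrative
    try (CloseableHttpClient client = HttpClients.createDefault();
         CloseableHttpResponse response = client.execute(new HttpGet(url))) {
      HttpEntity entity = response.getEntity();
      try (InputStream in = entity.getContent()) {
        // ... stream the key's bytes to the user; the entity stays open
        // exactly as long as this stream does ...
        System.out.println("first byte: " + in.read());
      } finally {
        // The caller consumes the entity so the pooled connection is reused.
        EntityUtils.consume(entity);
      }
    }
  }
}
{code}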


> Ozone: OzoneClient: Support for REST protocol
> -
>
> Key: HDFS-12549
> URL: https://issues.apache.org/jira/browse/HDFS-12549
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12549-HDFS-7240.000.patch, 
> HDFS-12549-HDFS-7240.001.patch, HDFS-12549-HDFS-7240.002.patch, 
> HDFS-12549-HDFS-7240.003.patch, HDFS-12549-HDFS-7240.004.patch
>
>
> Support for REST protocol in OzoneClient. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12779) [READ] Allow cluster id to be specified to the Image generation tool

2017-11-08 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12779:
--
Status: Open  (was: Patch Available)

> [READ] Allow cluster id to be specified to the Image generation tool
> 
>
> Key: HDFS-12779
> URL: https://issues.apache.org/jira/browse/HDFS-12779
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Trivial
> Attachments: HDFS-12779-HDFS-9806.001.patch
>
>
> Setting the cluster id for the FSImage generated for PROVIDED files is 
> required when the Namenode for PROVIDED files is expected to run in 
> federation with other Namenodes that manage local storage/data.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12789) [READ] Image generation tool does not close an opened stream

2017-11-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244540#comment-16244540
 ] 

Virajith Jalaparti commented on HDFS-12789:
---

Thanks for taking a look [~elgoiri]. Committing v2 to the HDFS-9806 branch.

> [READ] Image generation tool does not close an opened stream
> 
>
> Key: HDFS-12789
> URL: https://issues.apache.org/jira/browse/HDFS-12789
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12789-HDFS-9806.001.patch, 
> HDFS-12789-HDFS-9806.002.patch
>
>
> Other JIRAs (e.g., HDFS-12671) generate a FindBugs issue:
> {code}
> Bug type OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (click for details) 
> In class org.apache.hadoop.hdfs.server.namenode.ImageWriter
> In method new 
> org.apache.hadoop.hdfs.server.namenode.ImageWriter(ImageWriter$Options)
> Reference type java.io.OutputStream
> 1 instances of obligation remaining
> Obligation to clean up resource created at ImageWriter.java:[line 170] is not 
> discharged
> Remaining obligations: {OutputStream x 1}
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12549) Ozone: OzoneClient: Support for REST protocol

2017-11-08 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDFS-12549:
---
Attachment: HDFS-12549-HDFS-7240.004.patch

> Ozone: OzoneClient: Support for REST protocol
> -
>
> Key: HDFS-12549
> URL: https://issues.apache.org/jira/browse/HDFS-12549
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nanda kumar
>Assignee: Nanda kumar
> Attachments: HDFS-12549-HDFS-7240.000.patch, 
> HDFS-12549-HDFS-7240.001.patch, HDFS-12549-HDFS-7240.002.patch, 
> HDFS-12549-HDFS-7240.003.patch, HDFS-12549-HDFS-7240.004.patch
>
>
> Support for REST protocol in OzoneClient. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12791) NameNode Fsck http Connection can timeout for directories with multiple levels

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244523#comment-16244523
 ] 

Hadoop QA commented on HDFS-12791:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m  1s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12791 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896674/HDFS-12791.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5ae343e25933 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4c220e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22005/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22005/testReport/ |

[jira] [Commented] (HDFS-12790) [SPS]: Rebasing HDFS-10285 branch after HDFS-10467, HDFS-12599 and HDFS-11968 commits

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244513#comment-16244513
 ] 

Hadoop QA commented on HDFS-12790:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10285 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  8m 
14s{color} | {color:red} root in HDFS-10285 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdfs in HDFS-10285 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-10285 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs in HDFS-10285 failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
49s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdfs in HDFS-10285 failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10285 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 51s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 303 new + 83 unchanged 
- 2 fixed = 386 total (was 85) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}117m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestViewFSStoragePolicyCommands |
|   | hadoop.hdfs.tools.TestWebHDFSStoragePolicyCommands |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HDFS-12790 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12896679/HDFS-12790-HDFS-10285-01.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 818cfda0bc99 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-10285 / 143dd0c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_131 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22007/artifact/out/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/22007/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 

[jira] [Updated] (HDFS-12783) [branch-2] "dfsrouter" should use hdfsScript

2017-11-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12783:
---
Fix Version/s: (was: 2.9.1)
   2.9.0

> [branch-2] "dfsrouter" should use hdfsScript
> 
>
> Key: HDFS-12783
> URL: https://issues.apache.org/jira/browse/HDFS-12783
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: RBF
> Fix For: 2.9.0, 2.10.0
>
> Attachments: HDFS-12783-branch-2.patch
>
>
> *When we start "dfsrouter" with "hadoop-daemon.sh"*, it fails with the
> following error (found during 2.9 verification):
> brahma@brahma:/opt/hadoop-2.9.0/sbin$ ./hadoop-daemon.sh start dfsrouter
> starting dfsrouter, logging to 
> /opt/hadoop-2.9.0/logs/hadoop-brahma-dfsrouter-brahma.out
> Error: Could not find or load main class dfsrouter 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12052) Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS

2017-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244497#comment-16244497
 ] 

Xiao Chen commented on HDFS-12052:
--

Thanks [~3opan] for reporting and fixing this issue! Would you be interested
in providing a branch-2 fix as well?

> Set SWEBHDFS delegation token kind when ssl is enabled in HttpFS
> 
>
> Key: HDFS-12052
> URL: https://issues.apache.org/jira/browse/HDFS-12052
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: httpfs, webhdfs
>Affects Versions: 2.7.3, 3.0.0-alpha3
>Reporter: Zoran Dimitrijevic
>Assignee: Zoran Dimitrijevic
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12052.00.patch, HDFS-12052.01.patch, 
> HDFS-12052.02.patch, HDFS-12052.03.patch, HDFS-12052.04.patch, 
> HDFS-12052.05.patch, HDFS-12052.06.patch, HDFS-12052.07.patch
>
>
> When httpfs runs with httpfs.ssl.enabled it should return SWEBHDFS delegation 
> tokens. 
> Currently, httpfs returns WEBHDFS delegation "kind" for tokens regardless of 
> whether ssl is enabled or not. If clients directly connect to renew tokens 
> (for example, hdfs dfs) all works because httpfs doesn't check whether token 
> kind is for swebhdfs or webhdfs. However, this breaks when yarn rm needs to 
> renew the token for the job (for example, when running hadoop distcp). Since 
> DT kind is WEBHDFS, rm tries to establish non-ssl connection to httpfs and 
> fails.
> I've tested a simple patch which I'll upload to this jira, and it fixes this 
> issue (hadoop distcp works).
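
A minimal sketch of the kind selection the fix implies; the token-kind strings follow Hadoop's WEBHDFS/SWEBHDFS convention, but the surrounding helper is illustrative, not the actual HttpFS code.

{code:java}
import org.apache.hadoop.io.Text;

// Sketch: select the delegation token kind based on whether SSL is enabled,
// so renewers (e.g. the YARN RM) connect with the right scheme.
final class TokenKindSketch {
  static final Text WEBHDFS_KIND = new Text("WEBHDFS delegation");
  static final Text SWEBHDFS_KIND = new Text("SWEBHDFS delegation");

  static Text tokenKind(boolean sslEnabled) {
    return sslEnabled ? SWEBHDFS_KIND : WEBHDFS_KIND;
  }

  public static void main(String[] args) {
    System.out.println(tokenKind(true));  // SWEBHDFS delegation
    System.out.println(tokenKind(false)); // WEBHDFS delegation
  }
}
{code}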



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12726) BlockPlacementPolicyDefault's debugLoggingBuilder may not be logged

2017-11-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244492#comment-16244492
 ] 

Xiao Chen commented on HDFS-12726:
--

Hi [~bharatviswa],
Thanks for your interest in this one. Are you actively working on it? I'd like
to pick it up if you're busy. Thank you.

> BlockPlacementPolicyDefault's debugLoggingBuilder may not be logged
> ---
>
> Key: HDFS-12726
> URL: https://issues.apache.org/jira/browse/HDFS-12726
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: logging
>Reporter: Xiao Chen
>Assignee: Bharat Viswanadham
>  Labels: supportability
>
> While debugging HDFS-12725, I noticed that {{BlockPlacementPolicyDefault}}'s
> {{debugLoggingBuilder}} does a lot of {{get}} and {{append}} calls, but the
> result is never {{toString}}'ed and passed to {{LOG.debug}}.
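
A hedged sketch of the fix pattern: keep the per-thread builder, but actually flush it through the logger. The names here are illustrative, not the actual {{BlockPlacementPolicyDefault}} fields.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch: a per-thread debug builder that is actually emitted via the logger.
final class DebugLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugLoggingSketch.class);
  private static final ThreadLocal<StringBuilder> DEBUG =
      ThreadLocal.withInitial(StringBuilder::new);

  static void append(String reason) {
    if (LOG.isDebugEnabled()) {
      DEBUG.get().append(reason).append(' ');
    }
  }

  static void flush() {
    if (!LOG.isDebugEnabled()) {
      return;
    }
    StringBuilder sb = DEBUG.get();
    if (sb.length() > 0) {
      LOG.debug(sb.toString()); // the step the original code never performs
      sb.setLength(0);
    }
  }
}
{code}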



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12713) [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata and PROVIDED storage metadata

2017-11-08 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244490#comment-16244490
 ] 

Virajith Jalaparti commented on HDFS-12713:
---

Hi [~ehiggs], yes, I think that makes sense. The {{getBlockPoolID()}} in the
diff was in the {{Reader}} class, but having the block pool ID as an argument
of {{getReader}} would also be useful.
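
A hedged sketch of the interface shape being discussed; the type and method names are illustrative.

{code:java}
import java.io.IOException;

// Sketch: the alias map hands out readers scoped to a block pool, so PROVIDED
// block metadata is resolved against the right namespace.
interface BlockAliasMapSketch<T> {

  interface Reader<U> extends AutoCloseable {
    U resolve(long blockId) throws IOException;
  }

  // Block pool ID as an argument of getReader, per the discussion above.
  Reader<T> getReader(String blockPoolId) throws IOException;
}
{code}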

> [READ] Refactor FileRegion and BlockAliasMap to separate out HDFS metadata 
> and PROVIDED storage metadata
> 
>
> Key: HDFS-12713
> URL: https://issues.apache.org/jira/browse/HDFS-12713
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Ewan Higgs
> Attachments: HDFS-12713-HDFS-9806.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12459) Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API

2017-11-08 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16244343#comment-16244343
 ] 

Rushabh S Shah commented on HDFS-12459:
---

bq. I don't think so. WebHDFS.md is the doc for webhdfs, not for 
WebHdfsFileSystem. 
I see your point. I think it's okay to go with the approach in the patch.

I have a couple of comments regarding v5 of the patch.
1. +NamenodeWebHdfsMethods.java+
{noformat}
 final String js = JsonUtil.toJsonString("BlockLocations", 
JsonUtil.toJsonMap(locations));
{noformat}
{{JsonUtil}} should have a method {{JsonUtil#toJsonString(BlockLocation[])}},
just to be consistent with the other methods. Refer to
{{JsonUtil.toJsonString(BlockStoragePolicy[] storagePolicies)}}.
Some of the recently added methods pass the key along with the map.
Apologies for not pointing this out in the previous review.
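
A hedged sketch of the suggested helper shape, mirroring the {{toJsonString(BlockStoragePolicy[])}} convention; the map contents and the final serialization step are illustrative stand-ins for the real {{JsonUtil}} internals.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.fs.BlockLocation;

// Sketch: a JsonUtil-style overload that applies the "BlockLocations" key
// itself, so callers don't assemble the wrapper by hand.
final class JsonUtilSketch {

  static Map<String, Object> toJsonMap(BlockLocation[] locations) {
    List<Map<String, Object>> array = new ArrayList<>();
    for (BlockLocation l : locations) {
      Map<String, Object> m = new HashMap<>();
      m.put("offset", l.getOffset());
      m.put("length", l.getLength());
      array.add(m);
    }
    Map<String, Object> wrapper = new HashMap<>();
    wrapper.put("BlockLocation", array);
    return wrapper;
  }

  static String toJsonString(BlockLocation[] locations) {
    Map<String, Object> json = new HashMap<>();
    json.put("BlockLocations", toJsonMap(locations));
    return json.toString(); // stand-in for the real JSON serializer
  }
}
{code}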

2. +TestWebHDFS.java+
{noformat}
   public void testWebHdfsGetBlockLocationsWithStorageType() throws Exception{
 MiniDFSCluster cluster = null;
 final Configuration conf = WebHdfsTestUtil.createConf();
+final int offset = 42;
+final int length = 512;
+final Path path = new Path("/foo");
+byte[] contents = new byte[1024];
+RANDOM.nextBytes(contents);
+try {
+  cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
+  final WebHdfsFileSystem fs = WebHdfsTestUtil.getWebHdfsFileSystem(conf,
+  WebHdfsConstants.WEBHDFS_SCHEME);
+  try (OutputStream os = fs.create(path)) {
+os.write(contents);
+  }
+  BlockLocation[] locations = fs.getFileBlockLocations(path, offset,
+  length);
+  for (BlockLocation location: locations) {
+StorageType[] storageTypes = location.getStorageTypes();
+Assert.assertTrue(storageTypes != null && storageTypes.length > 0 &&
+storageTypes[0] == StorageType.DISK);
+  }
+} finally {
+  if (cluster != null) {
+cluster.shutdown();
+  }
+}
+  }
{noformat}

From the diff, it looks like you changed the
{{testWebHdfsGetBlockLocationsWithStorageType}} method, which is not correct.
When I applied the change and ran git blame, it showed the same behavior.
Can you please fix that?




> Fix revert: Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
> 
>
> Key: HDFS-12459
> URL: https://issues.apache.org/jira/browse/HDFS-12459
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12459.001.patch, HDFS-12459.002.patch, 
> HDFS-12459.003.patch, HDFS-12459.004.patch, HDFS-12459.005.patch
>
>
> HDFS-11156 was reverted because the implementation was non-optimal. Based on
> the suggestion from [~shahrs87], we should avoid creating a DFS client to get
> block locations because that creates an extra RPC call. Instead, we should
> use {{NamenodeProtocols#getBlockLocations}} and then convert
> {{LocatedBlocks}} to {{BlockLocation[]}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12756) Ozone: Add datanodeID to heartbeat responses and container protocol

2017-11-08 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12756:

Attachment: HDFS-12756-HDFS-7240.004.patch

Addressed checkstyle issues and rebased the patch on the latest top of the tree.

> Ozone: Add datanodeID to heartbeat responses and container protocol
> ---
>
> Key: HDFS-12756
> URL: https://issues.apache.org/jira/browse/HDFS-12756
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-12756-HDFS-7240.001.patch, 
> HDFS-12756-HDFS-7240.002.patch, HDFS-12756-HDFS-7240.003.patch, 
> HDFS-12756-HDFS-7240.004.patch
>
>
> If we have the datanode ID in heartbeat responses and in the commands sent to
> a datanode, we will be able to do additional sanity checking on the datanode
> before executing a command. This is also very helpful in creating a
> MiniOzoneCluster with 1000s of simulated nodes, which is needed for
> scale-based unit tests of SCM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


