[jira] [Commented] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726506#comment-16726506
 ] 

Hadoop QA commented on HDFS-13856:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m  
6s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13856 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952629/HDFS-13856-HDFS-13891.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c72cb5e5b630 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c9ebaf2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25839/testReport/ |
| Max. process+thread count | 1370 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25839/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---

[jira] [Commented] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726487#comment-16726487
 ] 

Fei Hui commented on HDFS-14161:


[~elgoiri] Uploaded the v004 patch. 
It sets the msg value according to whether failover happened, and logs 
ioe.getMessage() when a StandbyException occurs:
{code:java}
if (failover) {
  // All namenodes were unavailable or in standby
  msg = "No namenode available to invoke " + method.getName() + " " +
  Arrays.toString(params);
} else {
  msg = "Internal error occurs in router";
}
{code}
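For context, a minimal sketch (assumed shape, not code from the patch) of how msg 
can then surface as a retriable exception:
{code:java}
// Hypothetical sketch: after all namenodes fail, wrap msg in a
// StandbyException (org.apache.hadoop.ipc.StandbyException) so the
// client's failover/retry policy retries instead of giving up.
LOG.error("{}: {}", msg, ioe.getMessage());
throw new StandbyException(msg);
{code}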


> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> ---
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161-HDFS-13891.004.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> 

[jira] [Updated] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14161:
---
Attachment: HDFS-14161-HDFS-13891.004.patch

> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> ---
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161-HDFS-13891.004.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-20 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726470#comment-16726470
 ] 

Fei Hui commented on HDFS-13856:


[~elgoiri] Rebased the code on branch HDFS-13891. Changed "-refresh" to 
"-refreshRouterArgs" because "-refresh" already exists.

> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856-HDFS-13891.001.patch, HDFS-13856.001.patch, 
> HDFS-13856.002.patch
>
>
> Like the NameNode, the Router should support refreshing policies individually. 
> For example, we have implemented simple password authentication per RPC 
> connection; the password dict can be refreshed through the generic refresh 
> policy. We want to support this in RouterAdminServer as well.
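As a hedged illustration of the use case above (the key "refreshPasswords" and 
reloadPasswordDict() are assumed names, not from the patch), a handler registered 
with Hadoop's generic refresh registry looks roughly like this:
{code:java}
// Hypothetical sketch: register a refresh handler so that a generic
// refresh command can reload the password dict at runtime.
// Uses org.apache.hadoop.ipc.RefreshRegistry / RefreshResponse.
RefreshRegistry.defaultRegistry().register("refreshPasswords",
    (identifier, args) -> {
      reloadPasswordDict();  // application-specific reload (assumed)
      return RefreshResponse.successResponse();
    });
{code}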



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-20 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-13856:
---
Attachment: HDFS-13856-HDFS-13891.001.patch

> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856-HDFS-13891.001.patch, HDFS-13856.001.patch, 
> HDFS-13856.002.patch
>
>
> Like the NameNode, the Router should support refreshing policies individually. 
> For example, we have implemented simple password authentication per RPC 
> connection; the password dict can be refreshed through the generic refresh 
> policy. We want to support this in RouterAdminServer as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-937) Create an S3 Auth Table

2018-12-20 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-937:
---
Attachment: HDDS-937-HDDS-4.01.patch

> Create an S3 Auth Table
> ---
>
> Key: HDDS-937
> URL: https://issues.apache.org/jira/browse/HDDS-937
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-937-HDDS-4.00.patch, HDDS-937-HDDS-4.01.patch, 
> HDDS-937.001.patch
>
>
> Add an S3 auth table in Ozone Manager. This table allows us to create a 
> mapping from S3 keys to UGIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-937) Create an S3 Auth Table

2018-12-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726448#comment-16726448
 ] 

Bharat Viswanadham edited comment on HDDS-937 at 12/21/18 5:20 AM:
---

Hi [~dineshchitlangia]

Thanks for the patch.

*A few minor nits:*
 # Can we change the table name to s3SecretTable, to be consistent with the 
other table naming?
 # Rename s3SecretsTable -> s3SecretTable in all places.

Also, instead of Table we can now use TypedTable; see HDDS-748 and HDDS-864 (a 
sketch follows below).

I think that is missing in the HDDS-4 branch, and we can do it in the follow-up 
Jira HDDS-938, where we actually use this table.
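A minimal sketch of the TypedTable suggestion (the table name, value class, and 
the dbStore handle are assumptions, not from the patch):
{code:java}
// Hypothetical sketch: a typed table keyed by the S3 access key id,
// using the DBStore/TypedTable API introduced in HDDS-748.
Table<String, S3SecretValue> s3SecretTable =
    dbStore.getTable("s3SecretTable", String.class, S3SecretValue.class);
{code}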

 


was (Author: bharatviswa):
Hi [~dineshchitlangia]

Thanks for the patch.

*One minor nit:*

Can we change the table name to s3SecretTable, to be consistent with the other 
table naming?

Also, instead of Table we can now use TypedTable; see HDDS-748 and HDDS-864.

I think that is missing in the HDDS-4 branch, and we can do it in the follow-up 
Jira HDDS-938, where we actually use this table.

 

> Create an S3 Auth Table
> ---
>
> Key: HDDS-937
> URL: https://issues.apache.org/jira/browse/HDDS-937
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-937-HDDS-4.00.patch, HDDS-937.001.patch
>
>
> Add an S3 auth table in Ozone Manager. This table allows us to create a 
> mapping from S3 keys to UGIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726453#comment-16726453
 ] 

Hadoop QA commented on HDFS-13856:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
39s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13856 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952623/HDFS-13856.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ff8606050bcb 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f659485 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25838/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25838/testReport/ |
| Max. process+thread count | 1081 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25838/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Commented] (HDDS-937) Create an S3 Auth Table

2018-12-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726448#comment-16726448
 ] 

Bharat Viswanadham commented on HDDS-937:
-

Hi [~dineshchitlangia]

Thanks for the patch.

*A few minor nits:*

Can we change the table name to s3SecretTable, to be consistent with the other 
table naming?

Also, instead of Table we can now use TypedTable; see HDDS-748 and HDDS-864.

I think that is missing in the HDDS-4 branch, and we can do it in the follow-up 
Jira HDDS-938, where we actually use this table.

 

> Create an S3 Auth Table
> ---
>
> Key: HDDS-937
> URL: https://issues.apache.org/jira/browse/HDDS-937
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-937-HDDS-4.00.patch, HDDS-937.001.patch
>
>
> Add an S3 auth table in Ozone Manager. This table allows us to create a 
> mapping from S3 keys to UGIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-937) Create an S3 Auth Table

2018-12-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726448#comment-16726448
 ] 

Bharat Viswanadham edited comment on HDDS-937 at 12/21/18 5:19 AM:
---

Hi [~dineshchitlangia]

Thanks for the patch.

*One minor nit:*

Can we change the table name to s3SecretTable, to be consistent with the other 
table naming?

Also, instead of Table we can now use TypedTable; see HDDS-748 and HDDS-864.

I think that is missing in the HDDS-4 branch, and we can do it in the follow-up 
Jira HDDS-938, where we actually use this table.

 


was (Author: bharatviswa):
Hi [~dineshchitlangia]

Thanks for the patch.

*A few minor nits:*

Can we change the table name to s3SecretTable, to be consistent with the other 
table naming?

Also, instead of Table we can now use TypedTable; see HDDS-748 and HDDS-864.

I think that is missing in the HDDS-4 branch, and we can do it in the follow-up 
Jira HDDS-938, where we actually use this table.

 

> Create an S3 Auth Table
> ---
>
> Key: HDDS-937
> URL: https://issues.apache.org/jira/browse/HDDS-937
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Anu Engineer
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-937-HDDS-4.00.patch, HDDS-937.001.patch
>
>
> Add an S3 auth table in Ozone Manager. This table allows us to create a 
> mapping from S3 keys to UGIs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-20 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13856:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-13891

> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the NameNode, the Router should support refreshing policies individually. 
> For example, we have implemented simple password authentication per RPC 
> connection; the password dict can be refreshed through the generic refresh 
> policy. We want to support this in RouterAdminServer as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-20 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726447#comment-16726447
 ] 

Íñigo Goiri commented on HDFS-13856:


I think this should be part of HDFS-13891.
Moving it there.
Can you rebase to that branch?

> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the NameNode, the Router should support refreshing policies individually. 
> For example, we have implemented simple password authentication per RPC 
> connection; the password dict can be refreshed through the generic refresh 
> policy. We want to support this in RouterAdminServer as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726446#comment-16726446
 ] 

Íñigo Goiri commented on HDFS-14161:


For the error handling, I'm not sure that hiding it behind "Internal error occurs 
in router" is the most intuitive.
I would leave it as it was, but instead of just having:
{code}
if (ioe instanceof StandbyException) {
  LOG.error("{} {} at {} is in Standby", nsId, nnId, addr);
} else {
  LOG.error("{} {} at {} error: \"{}\"",
  nsId, nnId, addr, ioe.getMessage());
}
{code}
I would do:
{code}
if (ioe instanceof StandbyException) {
  LOG.error("{} {} at {} is in Standby: {}",);
  nsId, nnId, addr, ioe.getMessage());
} else {
  LOG.error("{} {} at {} error: \"{}\"",
  nsId, nnId, addr, ioe.getMessage());
}
{code}


> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> ---
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at 

[jira] [Updated] (HDFS-14161) RBF: Throw StandbyException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14161:
---
Summary: RBF: Throw StandbyException instead of IOException so that client 
can retry when can not get connection  (was: RBF: Throw RetriableException 
instead of IOException so that client can retry when can not get connection)

> RBF: Throw StandbyException instead of IOException so that client can retry 
> when can not get connection
> ---
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> 

[jira] [Commented] (HDDS-921) Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726442#comment-16726442
 ] 

Hadoop QA commented on HDDS-921:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 47s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
48s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.TestContainerReplication |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-921 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952624/HDDS-921.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux ab7766fd6c60 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / f659485 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1977/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1977/testReport/ |
| Max. process+thread count | 1099 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1977/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)
> --
>
> Key: HDDS-921
> URL: https://issues.apache.org/jira/browse/HDDS-921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>

[jira] [Commented] (HDFS-14164) Namenode should not be started if safemode threshold is out of boundary

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726441#comment-16726441
 ] 

Hadoop QA commented on HDFS-14164:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestHDFSServerPorts |
|   | hadoop.hdfs.server.namenode.TestBackupNode |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.TestListCorruptFileBlocks |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14164 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952615/HDFS-14164-001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4c2ad0f907eb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a80d321 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25836/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25836/testReport/ |
| Max. process+thread count 

[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726435#comment-16726435
 ] 

Hadoop QA commented on HDFS-14161:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
58s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 45s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterAllResolver |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14161 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952617/HDFS-14161-HDFS-13891.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cf9d68c7fa79 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c9ebaf2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25837/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25837/testReport/ |
| Max. process+thread count | 1337 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Updated] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-20 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-13856:
---
Attachment: HDFS-13856.002.patch

> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the NameNode, the Router should support refreshing policies individually. 
> For example, we have implemented simple password authentication per RPC 
> connection; the password dict can be refreshed through the generic refresh 
> policy. We want to support this in RouterAdminServer as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-921) Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-921:

Attachment: HDDS-921.00.patch

> Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)
> --
>
> Key: HDDS-921
> URL: https://issues.apache.org/jira/browse/HDDS-921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-921.00.patch
>
>
> Currently the JVM pause monitor is not added to Ozone daemons such as OM, SCM 
> and the Ozone Datanodes. This Jira proposes to add the JVM pause monitor to 
> these daemons.
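
A minimal sketch, assuming Hadoop's org.apache.hadoop.util.JvmPauseMonitor 
service API, of how a daemon typically wires the monitor into its start/stop 
path; OzoneDaemonSketch is a stand-in class, not actual Ozone code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.JvmPauseMonitor;

public class OzoneDaemonSketch {
  private JvmPauseMonitor pauseMonitor;

  void start(Configuration conf) {
    pauseMonitor = new JvmPauseMonitor();
    pauseMonitor.init(conf);  // pause-warning thresholds come from conf
    pauseMonitor.start();     // background thread logs long JVM/GC pauses
  }

  void stop() {
    if (pauseMonitor != null) {
      pauseMonitor.stop();
    }
  }
}
{code}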



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-921) Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-921:

Status: Patch Available  (was: Open)

> Add JVM pause monitor to Ozone Daemons (OM, SCM and Datanodes)
> --
>
> Key: HDDS-921
> URL: https://issues.apache.org/jira/browse/HDDS-921
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: OM, SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-921.00.patch
>
>
> Currently the JVM pause monitor is not added to Ozone daemons such as OM, SCM 
> and the Ozone Datanodes. This Jira proposes to add the JVM pause monitor to 
> these daemons.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13856) RBF: RouterAdmin should support dfsrouteradmin -refresh command

2018-12-20 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726429#comment-16726429
 ] 

Fei Hui commented on HDFS-13856:


[~elgoiri] Rebased the code on the trunk branch.

> RBF: RouterAdmin should support dfsrouteradmin -refresh command
> ---
>
> Key: HDFS-13856
> URL: https://issues.apache.org/jira/browse/HDFS-13856
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: federation, hdfs
>Affects Versions: 3.0.0, 3.1.0, 2.9.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13856.001.patch, HDFS-13856.002.patch
>
>
> Like the NameNode, the Router should support refreshing policies individually. 
> For example, we have implemented simple password authentication per RPC 
> connection; the password dict can be refreshed through the generic refresh 
> policy. We want to support the same in RouterAdminServer. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14164) Namenode should not be started if safemode threshold is out of boundary

2018-12-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726424#comment-16726424
 ] 

Bharat Viswanadham commented on HDFS-14164:
---

Hi [~kpalanisamy]

Thank you for reporting and fixing this issue.

Overall the patch LGTM.

A few minor comments:
 # We can declare the variables as float instead of double.
 # We can simplify the if condition to if (threshold <= minThreshold || 
threshold >= maxThreshold) (see the sketch after this list).
 # In the tests we can use GenericTestUtils.assertExceptionContains and verify 
the message.
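
A sketch of what comments 2 and 3 could look like in code; the bounds, the 
message, and the test body are illustrative assumptions, not the actual patch:

{code:java}
// Startup-side validation (fragment; conf is the NameNode Configuration):
float threshold = conf.getFloat(
    DFSConfigKeys.DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_KEY,
    DFSConfigKeys.DFS_NAMENODE_SAFEMODE_THRESHOLD_PCT_DEFAULT);
float minThreshold = 0.0f;  // illustrative bounds
float maxThreshold = 1.0f;
if (threshold <= minThreshold || threshold >= maxThreshold) {
  throw new HadoopIllegalArgumentException("Safe mode threshold "
      + threshold + " should be between " + minThreshold
      + " and " + maxThreshold);
}

// Test-side assertion (fragment):
try {
  cluster = new MiniDFSCluster.Builder(conf).build();
  fail("NameNode should not start with an out-of-range threshold");
} catch (IllegalArgumentException e) {
  GenericTestUtils.assertExceptionContains("Safe mode threshold", e);
}
{code}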

> Namenode should not be started if safemode threshold is out of boundary
> ---
>
> Key: HDFS-14164
> URL: https://issues.apache.org/jira/browse/HDFS-14164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.3, 3.1.1
> Environment: #apache hadoop-3.1.1
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>  Labels: patch
> Attachments: HDFS-14164-001.patch
>
>
> Mistakenly, a user has configured the safemode 
> threshold (dfs.namenode.safemode.threshold-pct) to 090 instead of 0.90. Due to 
> this change, the UI shows an incorrect summary and the NameNode never leaves 
> safemode without manual intervention, because the total block count will 
> never match the additional block count that is to be reported.
>  I.e.
> {code:java}
> Wrong setting: dfs.namenode.safemode.threshold-pct=090
> Summary:
> Safe mode is ON. The reported blocks 0 needs additional 360 blocks to reach 
> the threshold 90. of total blocks 4. The number of live datanodes 3 has 
> reached the minimum number 0. Safe mode will be turned off automatically once 
> the thresholds have been reached.
> 10 files and directories, 4 blocks (4 replicated blocks, 0 erasure coded 
> block groups) = 14 total filesystem object(s).
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14161:
---
Attachment: HDFS-14161-HDFS-13891.003.patch

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726406#comment-16726406
 ] 

Fei Hui commented on HDFS-14161:


[~elgoiri] Uploaded the v003 patch; improved the message in RouterRpcClient#420.

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161-HDFS-13891.003.patch, 
> HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at 

[jira] [Updated] (HDFS-14164) Namenode should not be started if safemode threshold is out of boundary

2018-12-20 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HDFS-14164:
--
Description: 
Mistakenly, a user has configured the safemode 
threshold (dfs.namenode.safemode.threshold-pct) to 090 instead of 0.90. Due to 
this change, the UI shows an incorrect summary and the NameNode never leaves 
safemode without manual intervention, because the total block count will never 
match the additional block count that is to be reported.

 I.e.
{code:java}
Wrong setting: dfs.namenode.safemode.threshold-pct=090

Summary:

Safe mode is ON. The reported blocks 0 needs additional 360 blocks to reach the 
threshold 90. of total blocks 4. The number of live datanodes 3 has reached 
the minimum number 0. Safe mode will be turned off automatically once the 
thresholds have been reached.

10 files and directories, 4 blocks (4 replicated blocks, 0 erasure coded block 
groups) = 14 total filesystem object(s).

{code}
 

  was:
Mistakenly,  User has configured safemode 
threshold(dfs.namenode.safemode.threshold-pct) to 090 instead of 0.90. Due to 
this change,  UI has the incorrect summary and it never turns to be out of 
safemode until manual intervention. Because total block count will never match 
with an additional block count, which is to be reported.

 

i.e

Wrong setting: dfs.namenode.safemode.threshold-pct=090

Summary:

Safe mode is ON. The reported blocks 0 needs additional 360 blocks to reach the 
threshold 90. of total blocks 4. The number of live datanodes 3 has reached 
the minimum number 0. Safe mode will be turned off automatically once the 
thresholds have been reached.

10 files and directories, 4 blocks (4 replicated blocks, 0 erasure coded block 
groups) = 14 total filesystem object(s).


> Namenode should not be started if safemode threshold is out of boundary
> ---
>
> Key: HDFS-14164
> URL: https://issues.apache.org/jira/browse/HDFS-14164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.3, 3.1.1
> Environment: #apache hadoop-3.1.1
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>  Labels: patch
> Attachments: HDFS-14164-001.patch
>
>
> Mistakenly, a user has configured the safemode 
> threshold (dfs.namenode.safemode.threshold-pct) to 090 instead of 0.90. Due to 
> this change, the UI shows an incorrect summary and the NameNode never leaves 
> safemode without manual intervention, because the total block count will 
> never match the additional block count that is to be reported.
>  I.e.
> {code:java}
> Wrong setting: dfs.namenode.safemode.threshold-pct=090
> Summary:
> Safe mode is ON. The reported blocks 0 needs additional 360 blocks to reach 
> the threshold 90. of total blocks 4. The number of live datanodes 3 has 
> reached the minimum number 0. Safe mode will be turned off automatically once 
> the thresholds have been reached.
> 10 files and directories, 4 blocks (4 replicated blocks, 0 erasure coded 
> block groups) = 14 total filesystem object(s).
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14162) Balancer should work with ObserverNode

2018-12-20 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726395#comment-16726395
 ] 

Erik Krogen commented on HDFS-14162:


I attached a quick-and-dirty demonstration patch 
([^HDFS-14162-HDFS-12943.wip0.patch]) showing how we can solve this by making 
{{NameNodeHAProxyFactory}} aware of {{AlignmentContext}}, and making 
{{ObserverReadProxyProvider}} keep track of a {{ClientProtocol}} proxy in 
addition to its main proxy. In the normal case of ORPP with {{ClientProtocol}}, 
the same protocol object is used; in the case of a different protocol (e.g. 
{{NameNodeProtocol}}), it instantiates an extra client proxy to use for 
{{getHAServiceState()}}. It's not the cleanest way of solving the problem -- I 
don't really like needing to directly instantiate the {{ClientProtocol}} 
instead of being able to use the factory -- but it works.

I also had to modify the test slightly -- the way the DataNodes were created 
didn't seem to work properly.
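
A rough sketch of the shape of that change; only the idea of a secondary 
ClientProtocol proxy for state probing comes from the comment above, and the 
getOrCreateClientProxy helper is a hypothetical name:

{code:java}
// Fragment inside ObserverReadProxyProvider<T>: when T is not ClientProtocol
// (e.g. NamenodeProtocol for the Balancer), keep a separate ClientProtocol
// proxy per NameNode purely for getHAServiceState() probes.
private HAServiceState getHAServiceState(NNProxyInfo<T> proxyInfo) {
  try {
    ClientProtocol probe = getOrCreateClientProxy(proxyInfo); // hypothetical
    return probe.getHAServiceState();
  } catch (IOException e) {
    // Treat an unreachable NameNode as not readable.
    return HAServiceState.STANDBY;
  }
}
{code}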

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, 
> testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14164) Namenode should not be started if safemode threshold is out of boundary

2018-12-20 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HDFS-14164:
--
Attachment: HDFS-14164-001.patch
Status: Patch Available  (was: Open)

> Namenode should not be started if safemode threshold is out of boundary
> ---
>
> Key: HDFS-14164
> URL: https://issues.apache.org/jira/browse/HDFS-14164
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1, 2.7.3
> Environment: #apache hadoop-3.1.1
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
>  Labels: patch
> Attachments: HDFS-14164-001.patch
>
>
> Mistakenly, a user has configured the safemode 
> threshold (dfs.namenode.safemode.threshold-pct) to 090 instead of 0.90. Due to 
> this change, the UI shows an incorrect summary and the NameNode never leaves 
> safemode without manual intervention, because the total block count will 
> never match the additional block count that is to be reported.
>  
> I.e.
> Wrong setting: dfs.namenode.safemode.threshold-pct=090
> Summary:
> Safe mode is ON. The reported blocks 0 needs additional 360 blocks to reach 
> the threshold 90. of total blocks 4. The number of live datanodes 3 has 
> reached the minimum number 0. Safe mode will be turned off automatically once 
> the thresholds have been reached.
> 10 files and directories, 4 blocks (4 replicated blocks, 0 erasure coded 
> block groups) = 14 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14164) Namenode should not be started if safemode threshold is out of boundary

2018-12-20 Thread Karthik Palanisamy (JIRA)
Karthik Palanisamy created HDFS-14164:
-

 Summary: Namenode should not be started if safemode threshold is 
out of boundary
 Key: HDFS-14164
 URL: https://issues.apache.org/jira/browse/HDFS-14164
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.1.1, 2.7.3
 Environment: #apache hadoop-3.1.1
Reporter: Karthik Palanisamy
Assignee: Karthik Palanisamy


Mistakenly, a user has configured the safemode 
threshold (dfs.namenode.safemode.threshold-pct) to 090 instead of 0.90. Due to 
this change, the UI shows an incorrect summary and the NameNode never leaves 
safemode without manual intervention, because the total block count will never 
match the additional block count that is to be reported.

I.e.

Wrong setting: dfs.namenode.safemode.threshold-pct=090

Summary:

Safe mode is ON. The reported blocks 0 needs additional 360 blocks to reach the 
threshold 90. of total blocks 4. The number of live datanodes 3 has reached 
the minimum number 0. Safe mode will be turned off automatically once the 
thresholds have been reached.

10 files and directories, 4 blocks (4 replicated blocks, 0 erasure coded block 
groups) = 14 total filesystem object(s).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14162) Balancer should work with ObserverNode

2018-12-20 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14162:
---
Attachment: HDFS-14162-HDFS-12943.wip0.patch

> Balancer should work with ObserverNode
> --
>
> Key: HDFS-14162
> URL: https://issues.apache.org/jira/browse/HDFS-14162
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-14162-HDFS-12943.wip0.patch, 
> testBalancerWithObserver.patch
>
>
> Balancer provides a substantial RPC load on NameNode. It would be good to 
> divert Balancer RPCs {{getBlocks()}}, etc. to ObserverNode. The main problem 
> is that Balancer uses {{NamenodeProtocol}}, while ORPP currently supports 
> only {{ClientProtocol}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726375#comment-16726375
 ] 

Íñigo Goiri commented on HDFS-14161:


OK, now I remember.
I tried wrapping StandbyException and reporting something like 
RouterOverloadedException, but that didn't work because the client checks 
specifically for StandbyException and not a child of it.
So, let's return StandbyException for both.

The only thing is that we should improve the message in RouterRpcClient#420 to 
distinguish between a NameNode really being in Standby and us issuing a 
StandbyException to retry in another Router (see the sketch below).
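
A sketch of the kind of message improvement being suggested; the variable 
names are illustrative, and the real code sits in RouterRpcClient around the 
cited line:

{code:java}
import org.apache.hadoop.ipc.StandbyException;

// Fragment: say explicitly that the Router, not the NameNode, is reporting
// standby, so logs can tell the two cases apart.
if (connection == null) {
  throw new StandbyException("Router " + routerId
      + " could not get a connection to " + namenodeAddress
      + "; reporting standby so the client retries another Router");
}
{code}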

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   

[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726360#comment-16726360
 ] 

Fei Hui commented on HDFS-14161:


[~elgoiri] The purpose of throwing the exception is to give the client a 
chance to retry instead of failing. Throwing RetriableException makes the 
client access the same Router again, while throwing StandbyException makes the 
client access another Router. Maybe the Router reporting a Standby state is 
reasonable when it cannot serve requests; on the other hand, if we throw 
RetriableException, the client will retry seconds later. Both methods are 
workable; which one do you think is better? (A side-by-side sketch follows.)
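
The two options under discussion, side by side; both exception types are real 
Hadoop IPC classes, while the condition and variable names are illustrative:

{code:java}
import org.apache.hadoop.ipc.RetriableException;
import org.apache.hadoop.ipc.StandbyException;

// Fragment:
if (retrySameRouter) {
  // Option 1: the client backs off and retries the SAME Router later.
  throw new RetriableException("Cannot get a connection to " + nnAddress);
} else {
  // Option 2: the client fails over to ANOTHER Router immediately.
  throw new StandbyException("No connection available on this Router");
}
{code}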

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> 

[jira] [Commented] (HDDS-916) MultipartUpload: Complete Multipart upload request

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726357#comment-16726357
 ] 

Hadoop QA commented on HDDS-916:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 20s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
18s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952599/HDDS-916.06.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux e553c72923a5 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 7affa30 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1975/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1975/testReport/ |
| Max. process+thread count | 1078 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1975/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: Complete Multipart upload request
> --
>
> Key: HDDS-916
> URL: https://issues.apache.org/jira/browse/HDDS-916
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-916.01.patch, HDDS-916.02.patch, HDDS-916.03.patch, 
> HDDS-916.04.patch, 

[jira] [Commented] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726350#comment-16726350
 ] 

Hadoop QA commented on HDDS-930:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDDS-930 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-930 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952603/HDDS-930.02.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1976/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch, HDDS-930.02.patch, 
> HDDS-930.03.patch
>
>
> This Jira is to have abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-947) Implement OzoneManager State Machine

2018-12-20 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726348#comment-16726348
 ] 

Hanisha Koneru commented on HDDS-947:
-

HDDS-947.000.patch is an initial patch. I will add unit tests in the next patch.

> Implement OzoneManager State Machine
> 
>
> Key: HDDS-947
> URL: https://issues.apache.org/jira/browse/HDDS-947
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-947.000.patch
>
>
> The OM Ratis server would call the OM State Machine to apply the committed 
> transactions. The State Machine processes each transaction and updates the 
> state of OzoneManager.
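
A conceptual sketch, assuming Apache Ratis's BaseStateMachine API; the 
runCommand helper and the response payload are illustrative:

{code:java}
import java.util.concurrent.CompletableFuture;
import org.apache.ratis.protocol.Message;
import org.apache.ratis.statemachine.TransactionContext;
import org.apache.ratis.statemachine.impl.BaseStateMachine;

// Ratis invokes applyTransaction once per committed log entry; the state
// machine turns the entry into an OzoneManager state change.
public class OzoneManagerStateMachineSketch extends BaseStateMachine {
  @Override
  public CompletableFuture<Message> applyTransaction(TransactionContext trx) {
    Message response = runCommand(trx);
    return CompletableFuture.completedFuture(response);
  }

  private Message runCommand(TransactionContext trx) {
    // Hypothetical: decode the log data, update OM state, build a response.
    return Message.valueOf("applied");
  }
}
{code}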



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-947) Implement OzoneManager State Machine

2018-12-20 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-947:

Attachment: HDDS-947.000.patch

> Implement OzoneManager State Machine
> 
>
> Key: HDDS-947
> URL: https://issues.apache.org/jira/browse/HDDS-947
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-947.000.patch
>
>
> The OM Ratis server would call the OM State Machine to apply the committed 
> transactions. The State Machine processes each transaction and updates the 
> state of OzoneManager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726346#comment-16726346
 ] 

Bharat Viswanadham commented on HDDS-930:
-

Fixed checkstyle issues.

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch, HDDS-930.02.patch, 
> HDDS-930.03.patch
>
>
> This Jira is to have abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-930:

Attachment: HDDS-930.02.patch
HDDS-930.03.patch

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch, HDDS-930.02.patch, 
> HDDS-930.03.patch
>
>
> This Jira is to have abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-916) MultipartUpload: Complete Multipart upload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-916:

Attachment: HDDS-916.06.patch

> MultipartUpload: Complete Multipart upload request
> --
>
> Key: HDDS-916
> URL: https://issues.apache.org/jira/browse/HDDS-916
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-916.01.patch, HDDS-916.02.patch, HDDS-916.03.patch, 
> HDDS-916.04.patch, HDDS-916.05.patch, HDDS-916.06.patch
>
>
> This Jira is to support the completeMultipartUpload request in Ozone.
> This request stitches all the parts together and adds the entry into the 
> keyTable, making the key visible in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726306#comment-16726306
 ] 

Hadoop QA commented on HDDS-930:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} root: The patch generated 1 new + 2 unchanged - 
0 fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 32s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.s3.endpoint.TestRootList |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-930 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952592/HDDS-930.01.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux dc8fca2d4c32 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / a668f8e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1974/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1974/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1974/testReport/ |
| Max. process+thread count | 198 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1974/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> 

[jira] [Commented] (HDFS-13965) hadoop.security.kerberos.ticket.cache.path setting is not honored when KMS encryption is enabled.

2018-12-20 Thread LOKESKUMAR VIJAYAKUMAR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726300#comment-16726300
 ] 

LOKESKUMAR VIJAYAKUMAR commented on HDFS-13965:
---

Hello [~knanasi],

Thanks for checking this issue.

Our software is implemented using the corresponding C APIs provided by 
libhdfs. Our service runs as the root user; when connecting to the Hadoop 
cluster, we do a Kerberos login as the Hadoop user and use that ticket cache 
to access the data. Would it be possible to fix this so that the API works as 
expected in all cases?

Thanks,
Lokes


> hadoop.security.kerberos.ticket.cache.path setting is not honored when KMS 
> encryption is enabled.
> -
>
> Key: HDFS-13965
> URL: https://issues.apache.org/jira/browse/HDFS-13965
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, kms
>Affects Versions: 2.7.3, 2.7.7
>Reporter: LOKESKUMAR VIJAYAKUMAR
>Assignee: Kitti Nanasi
>Priority: Major
>
> _We use the *+hadoop.security.kerberos.ticket.cache.path+* setting to provide 
> a custom Kerberos cache path so that all Hadoop operations run as the 
> specified user. But this setting is not honored when KMS encryption is 
> enabled._
> _The program below, which reads a file, works when KMS encryption is not 
> enabled but fails when it is._
> _It looks like the *hadoop.security.kerberos.ticket.cache.path* setting is 
> not honored by *createConnection in KMSClientProvider.java.*_
>  
> HadoopTest.java (CLASSPATH needs to be set to compile and run)
>  
> import java.io.InputStream;
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>  
> public class HadoopTest {
>     public static int runRead(String[] args) throws Exception {
>         if (args.length < 3) {
>             System.err.println("HadoopTest hadoop_file_path hadoop_user kerberos_cache");
>             return 1;
>         }
>         Path inputPath = new Path(args[0]);
>         Configuration conf = new Configuration();
>         URI defaultURI = FileSystem.getDefaultUri(conf);
>         conf.set("hadoop.security.kerberos.ticket.cache.path", args[2]);
>         FileSystem fs = FileSystem.newInstance(defaultURI, conf, args[1]);
>         InputStream is = fs.open(inputPath);
>         byte[] buffer = new byte[4096];
>         int nr = is.read(buffer);
>         while (nr != -1) {
>             System.out.write(buffer, 0, nr);
>             nr = is.read(buffer);
>         }
>         return 0;
>     }
> 
>     public static void main(String[] args) throws Exception {
>         int returnCode = HadoopTest.runRead(args);
>         System.exit(returnCode);
>     }
> }
>  
>  
>  
> [root@lstrost3 testhadoop]# pwd
> /testhadoop
>  
> [root@lstrost3 testhadoop]# ls
> HadoopTest.java
>  
> [root@lstrost3 testhadoop]# export CLASSPATH=`hadoop classpath --glob`:.
>  
> [root@lstrost3 testhadoop]# javac HadoopTest.java
>  
> [root@lstrost3 testhadoop]# java HadoopTest
> HadoopTest  hadoop_file_path  hadoop_user  kerberos_cache
>  
> [root@lstrost3 testhadoop]# java HadoopTest /loki/loki.file loki 
> /tmp/krb5cc_1006
> 18/09/27 23:23:20 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> 18/09/27 23:23:21 WARN shortcircuit.DomainSocketFactory: The short-circuit 
> local reads feature cannot be used because libhadoop cannot be loaded.
> Exception in thread "main" java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: *{color:#FF}No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt){color}*
>     at 
> {color:#FF}*org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:551)*{color}
>     at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:831)
>     at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
>     at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1393)
>     at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1463)
>     at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:333)
>     at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:327)
>     at 
> 

[jira] [Updated] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-930:

Attachment: (was: HDDS-930.03.patch)

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch
>
>
> This Jira is to have abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-930:

Attachment: (was: HDDS-930.02.patch)

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch
>
>
> This Jira is to have abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-930:

Attachment: HDDS-930.00.patch

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch
>
>
> This Jira is to add an abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-930:

Attachment: HDDS-930.02.patch
HDDS-930.03.patch

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch, HDDS-930.02.patch, 
> HDDS-930.03.patch
>
>
> This Jira is to add an abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-948) MultipartUpload: S3 API for Abort Multipart Upload

2018-12-20 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-948:
---

 Summary: MultipartUpload: S3 API for Abort Multipart Upload
 Key: HDDS-948
 URL: https://issues.apache.org/jira/browse/HDDS-948
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Implement the S3 API for abort multipart upload.

https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html
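
For reference, the operation being implemented maps to S3's 
AbortMultipartUpload, i.e. {{DELETE /key?uploadId=...}}. A sketch of the 
client-side call using the AWS SDK for Java v1 against a hypothetical local 
gateway endpoint (endpoint, bucket, key and uploadId are placeholders):

{code:java}
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AbortMultipartUploadRequest;

public class AbortUploadExample {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
            new EndpointConfiguration("http://localhost:9878", "us-east-1"))
        .build();
    // After the abort, no further parts can be uploaded under this uploadId,
    // and storage used by already-uploaded parts is freed.
    s3.abortMultipartUpload(
        new AbortMultipartUploadRequest("bucket1", "key1", args[0]));
  }
}
{code}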



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-948) MultipartUpload: S3 API for Abort Multipart Upload

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-948:

Issue Type: Sub-task  (was: Bug)
Parent: HDDS-763

> MultipartUpload: S3 API for Abort Multipart Upload
> --
>
> Key: HDDS-948
> URL: https://issues.apache.org/jira/browse/HDDS-948
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Implement the S3 API for abort multipart upload.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-930:

Status: Patch Available  (was: In Progress)

HDDS-930.00 has only the content required for this jira, but it needs to be 
applied on top of HDDS-916. To let jenkins run, I attached HDDS-930.01, which 
contains both HDDS-916 and HDDS-930.

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch
>
>
> This Jira is to add an abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-930:

Target Version/s: 0.4.0

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch
>
>
> This Jira is to add an abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-930) Multipart Upload: Abort multiupload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-930:

Attachment: HDDS-930.01.patch

> Multipart Upload: Abort multiupload request
> ---
>
> Key: HDDS-930
> URL: https://issues.apache.org/jira/browse/HDDS-930
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-930.00.patch, HDDS-930.01.patch
>
>
> This Jira is to add an abort multipart upload request, which cancels the 
> multipart upload.
> [https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadAbort.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-597) Ratis: Support secure gRPC endpoint with mTLS for Ratis

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726240#comment-16726240
 ] 

Hadoop QA commented on HDDS-597:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDDS-4 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 1 
fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} root in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-597 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952579/HDDS-597-HDDS-4.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux be3d9062e3f9 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | HDDS-4 / 38861eb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1973/artifact/out/patch-mvninstall-root.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1973/artifact/out/patch-javadoc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1973/artifact/out/patch-unit-hadoop-ozone.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1973/artifact/out/patch-unit-hadoop-hdds.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1973/testReport/ |
| Max. process+thread count | 81 (vs. ulimit of 1) |
| modules | C: hadoop-hdds hadoop-hdds/client hadoop-hdds/common 
hadoop-hdds/container-service hadoop-hdds/server-scm hadoop-ozone 
hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1973/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ratis: Support secure gRPC endpoint with mTLS for Ratis
> ---
>
> Key: HDDS-597
> URL: https://issues.apache.org/jira/browse/HDDS-597
> 

[jira] [Created] (HDDS-947) Implement OzoneManager State Machine

2018-12-20 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-947:
---

 Summary: Implement OzoneManager State Machine
 Key: HDDS-947
 URL: https://issues.apache.org/jira/browse/HDDS-947
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


The OM Ratis server calls the OM State Machine to apply committed 
transactions. The State Machine processes each transaction and updates the 
state of the OzoneManager.
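
For context, such a state machine in Ratis has roughly the shape below; the 
OM-specific handling is a hypothetical sketch, not the HDDS-947 patch itself:

{code:java}
import java.util.concurrent.CompletableFuture;
import org.apache.ratis.protocol.Message;
import org.apache.ratis.statemachine.TransactionContext;
import org.apache.ratis.statemachine.impl.BaseStateMachine;

public class OzoneManagerStateMachine extends BaseStateMachine {
  @Override
  public CompletableFuture<Message> applyTransaction(TransactionContext trx) {
    // Invoked by the Ratis server once the log entry is committed: decode the
    // request from the log entry, apply it to OM state, return the response.
    final String request =
        trx.getLogEntry().getStateMachineLogEntry().getLogData().toStringUtf8();
    final String response = applyToOzoneManager(request);
    return CompletableFuture.completedFuture(Message.valueOf(response));
  }

  private String applyToOzoneManager(String request) {
    // Placeholder: real code would update OM metadata (volumes, buckets, keys).
    return "applied: " + request;
  }
}
{code}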



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-597) Ratis: Support secure gRPC endpoint with mTLS for Ratis

2018-12-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-597:

Attachment: HDDS-597-HDDS-4.001.patch

> Ratis: Support secure gRPC endpoint with mTLS for Ratis
> ---
>
> Key: HDDS-597
> URL: https://issues.apache.org/jira/browse/HDDS-597
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-597-HDDS-4.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-597) Ratis: Support secure gRPC endpoint with mTLS for Ratis

2018-12-20 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726227#comment-16726227
 ] 

Xiaoyu Yao commented on HDDS-597:
-

Attached a patch that depends on a Ratis change and a new release.

> Ratis: Support secure gRPC endpoint with mTLS for Ratis
> ---
>
> Key: HDDS-597
> URL: https://issues.apache.org/jira/browse/HDDS-597
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-597-HDDS-4.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-597) Ratis: Support secure gRPC endpoint with mTLS for Ratis

2018-12-20 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-597:

Status: Patch Available  (was: In Progress)

> Ratis: Support secure gRPC endpoint with mTLS for Ratis
> ---
>
> Key: HDDS-597
> URL: https://issues.apache.org/jira/browse/HDDS-597
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-597-HDDS-4.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-881) Encapsulate all client to OM requests into one request message

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-881:

Fix Version/s: 0.5.0

> Encapsulate all client to OM requests into one request message
> --
>
> Key: HDDS-881
> URL: https://issues.apache.org/jira/browse/HDDS-881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.5.0
>
> Attachments: HDDS-881.001.patch, HDDS-881.002.patch, 
> HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch, 
> HDDS-881.006.patch, HDDS-881.007.patch, HDDS-881.008.patch, HDDS-881.009.patch
>
>
> When OM receives a request, we need to transform it into a 
> Ratis-server-compatible request so that the OM's Ratis server can process 
> it.
> In this Jira, we just add support for converting a client request received 
> by OM into a RaftClient request. This transformed request is later passed 
> on to the OM's Ratis server.
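
As a rough illustration of the idea (the names below are hypothetical, not 
the committed HDDS-881 API): once every client call is wrapped in a single 
request message, it can be handed to Ratis as one opaque payload.

{code:java}
import org.apache.ratis.protocol.Message;
// Recent Ratis releases shade protobuf under this package; older ones use
// org.apache.ratis.shaded.com.google.protobuf instead.
import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;

final class OmRatisRequestSketch {
  /**
   * Wrap the serialized OM request (the single message that encapsulates
   * every client-to-OM call) as an opaque Ratis Message.
   */
  static Message toRatisMessage(byte[] serializedOmRequest) {
    return Message.valueOf(ByteString.copyFrom(serializedOmRequest));
  }
}
{code}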



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-915) Submit client request to OM Ratis server

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726123#comment-16726123
 ] 

Hadoop QA commented on HDDS-915:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} root: The patch generated 7 new + 0 unchanged - 
0 fixed = 7 total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 21 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 40s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
22s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
|   | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952429/HDDS-915.002.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 3252a63c39be 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 13d3f99 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1971/artifact/out/diff-checkstyle-root.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1971/artifact/out/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1971/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1971/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1971/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1057 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1971/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Submit client request to OM Ratis server
> 

[jira] [Commented] (HDDS-916) MultipartUpload: Complete Multipart upload request

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726119#comment-16726119
 ] 

Hadoop QA commented on HDDS-916:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 30s{color} | {color:orange} root: The patch generated 2 new + 2 unchanged - 
0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 32m 
26s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
30s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952566/HDDS-916.05.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux e1c3795bc2bb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 13d3f99 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1972/artifact/out/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1972/testReport/ |
| Max. process+thread count | 1098 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1972/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: Complete Multipart upload request
> --
>
> Key: HDDS-916
> URL: https://issues.apache.org/jira/browse/HDDS-916
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-916.01.patch, HDDS-916.02.patch, HDDS-916.03.patch, 
> HDDS-916.04.patch, HDDS-916.05.patch
>
>
> This Jira is to support the completeMultipartUpload request 

[jira] [Commented] (HDDS-916) MultipartUpload: Complete Multipart upload request

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726104#comment-16726104
 ] 

Hadoop QA commented on HDDS-916:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} root: The patch generated 2 new + 2 unchanged - 
0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 57s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
42s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952565/HDDS-916.04.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 1ad939206945 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 13d3f99 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1970/artifact/out/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1970/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1970/testReport/ |
| Max. process+thread count | 1090 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1970/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: Complete Multipart upload request
> --
>
> Key: HDDS-916
> URL: https://issues.apache.org/jira/browse/HDDS-916
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> 

[jira] [Commented] (HDDS-881) Encapsulate all client to OM requests into one request message

2018-12-20 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726079#comment-16726079
 ] 

Hanisha Koneru commented on HDDS-881:
-

Thank you all for the reviews and [~bharatviswa] for committing the patch.

> Encapsulate all client to OM requests into one request message
> --
>
> Key: HDDS-881
> URL: https://issues.apache.org/jira/browse/HDDS-881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-881.001.patch, HDDS-881.002.patch, 
> HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch, 
> HDDS-881.006.patch, HDDS-881.007.patch, HDDS-881.008.patch, HDDS-881.009.patch
>
>
> When OM receives a request, we need to transform it into a 
> Ratis-server-compatible request so that the OM's Ratis server can process 
> it.
> In this Jira, we just add support for converting a client request received 
> by OM into a RaftClient request. This transformed request is later passed 
> on to the OM's Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-915) Submit client request to OM Ratis server

2018-12-20 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-915:

Description: The OM Ratis client should submit incoming client requests to 
the Ratis server. OM should distinguish read requests from write requests.  
(was: After transforming a client request into a Ratis request (HDDS-881), 
the request should be submitted to the Ratis server.)

> Submit client request to OM Ratis server
> 
>
> Key: HDDS-915
> URL: https://issues.apache.org/jira/browse/HDDS-915
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-915.001.patch, HDDS-915.002.patch
>
>
> The OM Ratis client should submit incoming client requests to the Ratis 
> server. OM should distinguish read requests from write requests.
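
A sketch of what that dispatch could look like (hypothetical names; the 
blocking client API shown is the 2018-era one, newer Ratis releases moved it 
under {{raftClient.io()}}):

{code:java}
import java.io.IOException;
import org.apache.ratis.client.RaftClient;
import org.apache.ratis.protocol.Message;
import org.apache.ratis.protocol.RaftClientReply;

final class OmRatisClientSketch {
  private final RaftClient raftClient;

  OmRatisClientSketch(RaftClient raftClient) {
    this.raftClient = raftClient;
  }

  /** Writes go through the replicated log; reads can take the read-only path. */
  RaftClientReply submit(Message request, boolean isWrite) throws IOException {
    return isWrite ? raftClient.send(request) : raftClient.sendReadOnly(request);
  }
}
{code}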



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-915) Submit client request to OM Ratis server

2018-12-20 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-915:

Status: Patch Available  (was: Open)

> Submit client request to OM Ratis server
> 
>
> Key: HDDS-915
> URL: https://issues.apache.org/jira/browse/HDDS-915
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-915.001.patch, HDDS-915.002.patch
>
>
> After transforming a client request into a Ratis request (HDDS-881), the 
> request should be submitted to the Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when it cannot get a connection

2018-12-20 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726077#comment-16726077
 ] 

Íñigo Goiri commented on HDFS-14161:


Thanks [~ferhui] for the patch.
Coincidentally, yesterday I found an issue with throwing a {{StandbyException}}.
In {{RouterRpcClient#420}}, we report every {{StandbyException}} as if the 
Namenode were in Standby, which is no longer true in the overload case.
Will the {{RetriableException}} have the same effect of making the client 
retry against a different Router?
If so, we may want to make both a {{RetriableException}}.
Another option is to improve {{RouterRpcClient#420}} to report the exception, 
or to verify whether it is a real {{StandbyException}} or not.
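
To make the intent concrete, the connection-failure path could surface a 
{{RetriableException}} along these lines (a sketch, not the actual patch; 
names are hypothetical):

{code:java}
import java.io.IOException;
import org.apache.hadoop.ipc.RetriableException;

final class ConnectionCheckSketch {
  static void validateConnection(Object connection, String nnAddress)
      throws IOException {
    if (connection == null) {
      // RetriableException tells the client to retry (possibly against
      // another Router) instead of mislabeling the Namenode as standby.
      throw new RetriableException(
          "Cannot get a connection to " + nnAddress + ", please retry");
    }
  }
}
{code}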

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when it cannot get a connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when it gets an IOException; the stack trace follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> 

[jira] [Updated] (HDDS-916) MultipartUpload: Complete Multipart upload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-916:

Attachment: HDDS-916.05.patch

> MultipartUpload: Complete Multipart upload request
> --
>
> Key: HDDS-916
> URL: https://issues.apache.org/jira/browse/HDDS-916
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-916.01.patch, HDDS-916.02.patch, HDDS-916.03.patch, 
> HDDS-916.04.patch, HDDS-916.05.patch
>
>
> This Jira is to support the completeMultipartUpload request in Ozone.
> This request will stitch all the parts together and add the key entry to 
> the keyTable, making the key visible in Ozone.
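
Conceptually, the stitch step is an outline like the following (hypothetical 
types and names, not the patch itself): validate the client's part list 
against the uploaded parts, concatenate their block lists in part order, and 
commit a single key entry.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

final class CompleteUploadSketch {
  /** Concatenate the block lists of all parts, in the client's part order. */
  static List<String> stitchParts(Map<Integer, List<String>> uploadedParts,
                                  List<Integer> clientPartList) {
    List<String> keyBlocks = new ArrayList<>();
    for (int partNumber : clientPartList) {
      List<String> partBlocks = uploadedParts.get(partNumber);
      if (partBlocks == null) {
        throw new IllegalArgumentException("No such part: " + partNumber);
      }
      keyBlocks.addAll(partBlocks);
    }
    return keyBlocks;  // committed to the keyTable as the key's block list
  }
}
{code}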



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14156) RBF: RollEdit command fails with router

2018-12-20 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726073#comment-16726073
 ] 

Íñigo Goiri commented on HDFS-14156:


I meant cast and not class.
So, yes, let's go with the cast the way [~ayushtkn] proposes.

Then, for the unit test: in addition to a simple one in {{TestRouterRpc}}, I 
was proposing one that goes through the {{ClientProtocol}} methods and 
executes all of them.
Obviously, we will have a list of unimplemented ones.
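
The root cause is easy to reproduce outside the Router (a hedged 
reproduction, not the Router code itself): {{Class#cast}} on a primitive 
class such as {{long.class}} always rejects the boxed value, while a regular 
cast simply auto-unboxes it.

{code:java}
public class CastSketch {
  public static void main(String[] args) {
    Object boxed = Long.valueOf(42L);
    long ok = (long) boxed;                // regular cast: auto-unboxes fine
    Long alsoOk = Long.class.cast(boxed);  // wrapper class works too
    System.out.println(ok + " " + alsoOk);
    // Throws ClassCastException: "Cannot cast java.lang.Long to long".
    long fails = long.class.cast(boxed);
  }
}
{code}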

> RBF: RollEdit command fails with router
> ---
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14156.001.patch
>
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HDFS-14156) RBF: RollEdit command fails with router

2018-12-20 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725600#comment-16725600
 ] 

Íñigo Goiri edited comment on HDFS-14156 at 12/20/18 5:37 PM:
--

If possible, yes, we should do a regular cast. 


was (Author: elgoiri):
If possible, yes, we should do a regular class. 

> RBF: RollEdit command fails with router
> ---
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14156.001.patch
>
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (HDDS-88) Create separate message structure to represent ports in DatanodeDetails

2018-12-20 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-88?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726063#comment-16726063
 ] 

Hudson commented on HDDS-88:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15646 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15646/])
HDDS-881.009. Encapsulate all client to OM requests into one request (bharat: 
rev 13d3f99b37c505a2b7366f8a1a5c9dc11e5dd7b1)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/hdfs/server/datanode/ObjectStoreHandler.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisServer.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java


> Create separate message structure to represent ports in DatanodeDetails 
> 
>
> Key: HDDS-88
> URL: https://issues.apache.org/jira/browse/HDDS-88
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-88.000.patch, HDDS-88.001.patch, HDDS-88.002.patch
>
>
> DataNode uses many ports which have to be set in DatanodeDetails and sent to 
> SCM. These port details can be extracted into a separate protobuf message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-916) MultipartUpload: Complete Multipart upload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726060#comment-16726060
 ] 

Bharat Viswanadham commented on HDDS-916:
-

Rebased patch on top of the latest trunk.

> MultipartUpload: Complete Multipart upload request
> --
>
> Key: HDDS-916
> URL: https://issues.apache.org/jira/browse/HDDS-916
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-916.01.patch, HDDS-916.02.patch, HDDS-916.03.patch, 
> HDDS-916.04.patch
>
>
> This Jira is to support the completeMultipartUpload request in Ozone.
> This request will stitch all the parts together and add the key entry to 
> the keyTable, making the key visible in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-916) MultipartUpload: Complete Multipart upload request

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-916:

Attachment: HDDS-916.04.patch

> MultipartUpload: Complete Multipart upload request
> --
>
> Key: HDDS-916
> URL: https://issues.apache.org/jira/browse/HDDS-916
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-916.01.patch, HDDS-916.02.patch, HDDS-916.03.patch, 
> HDDS-916.04.patch
>
>
> This Jira is to support the completeMultipartUpload request in Ozone.
> This request will stitch all the parts together and add the key entry to 
> the keyTable, making the key visible in Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-881) Encapsulate all client to OM requests into one request message

2018-12-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726044#comment-16726044
 ] 

Bharat Viswanadham commented on HDDS-881:
-

Thank You [~arpitagarwal] and [~msingh] for the review and [~hanishakoneru] for 
the contribution.

I am going ahead and committing this: I need to rebase my HDDS-916 patch on 
top of it, and HDDS-930 in turn depends on HDDS-916. So, to unblock that 
work, I am committing this jira.

> Encapsulate all client to OM requests into one request message
> --
>
> Key: HDDS-881
> URL: https://issues.apache.org/jira/browse/HDDS-881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-881.001.patch, HDDS-881.002.patch, 
> HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch, 
> HDDS-881.006.patch, HDDS-881.007.patch, HDDS-881.008.patch, HDDS-881.009.patch
>
>
> When OM receives a request, we need to transform it into a 
> Ratis-server-compatible request so that the OM's Ratis server can process 
> it.
> In this Jira, we just add support for converting a client request received 
> by OM into a RaftClient request. This transformed request is later passed 
> on to the OM's Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-881) Encapsulate all client to OM requests into one request message

2018-12-20 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16726044#comment-16726044
 ] 

Bharat Viswanadham edited comment on HDDS-881 at 12/20/18 5:12 PM:
---

Thank You [~arpitagarwal] and [~msingh] for the review and [~hanishakoneru] for 
the contribution.

I am going ahead and committing this: I need to rebase my HDDS-916 patch on 
top of it, HDDS-930 in turn depends on HDDS-916, and all reviewers are +1 on 
the patch. So, to unblock that work, I am committing this jira.


was (Author: bharatviswa):
Thank You [~arpitagarwal] and [~msingh] for the review and [~hanishakoneru] for 
the contribution.

I am going ahead and committing this: I need to rebase my HDDS-916 patch on 
top of it, and HDDS-930 in turn depends on HDDS-916. So, to unblock that 
work, I am committing this jira.

> Encapsulate all client to OM requests into one request message
> --
>
> Key: HDDS-881
> URL: https://issues.apache.org/jira/browse/HDDS-881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-881.001.patch, HDDS-881.002.patch, 
> HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch, 
> HDDS-881.006.patch, HDDS-881.007.patch, HDDS-881.008.patch, HDDS-881.009.patch
>
>
> When OM receives a request, we need to transform it into a Ratis-compatible 
> request so that the OM's Ratis server can process it. 
> In this Jira, we just add support to convert a client request received by the 
> OM into a RaftClient request. This transformed request would later be passed 
> on to the OM's Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-881) Encapsulate all client to OM requests into one request message

2018-12-20 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-881:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk.

> Encapsulate all client to OM requests into one request message
> --
>
> Key: HDDS-881
> URL: https://issues.apache.org/jira/browse/HDDS-881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-881.001.patch, HDDS-881.002.patch, 
> HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch, 
> HDDS-881.006.patch, HDDS-881.007.patch, HDDS-881.008.patch, HDDS-881.009.patch
>
>
> When OM receives a request, we need to transform it into a Ratis-compatible 
> request so that the OM's Ratis server can process it. 
> In this Jira, we just add support to convert a client request received by the 
> OM into a RaftClient request. This transformed request would later be passed 
> on to the OM's Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-896) Handle over replicated containers in SCM

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725947#comment-16725947
 ] 

Hadoop QA commented on HDDS-896:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m  2s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
30s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.web.client.TestKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-896 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952546/HDDS-896.001.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux de91d00d76e2 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / ea621fa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1969/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1969/testReport/ |
| Max. process+thread count | 1062 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1969/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle over replicated containers in SCM
> 
>
> Key: HDDS-896
> URL: https://issues.apache.org/jira/browse/HDDS-896
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-896.000.patch, HDDS-896.001.patch
>
>
> When SCM detects that a container is over-replicated, it has to delete some 
> replicas to bring the number of replicas to match the required value. If the 
> container is in QUASI_CLOSED state, we should check the {{originNodeId}} 
> field while choosing the replica to delete.

[jira] [Commented] (HDDS-896) Handle over replicated containers in SCM

2018-12-20 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725905#comment-16725905
 ] 

Nanda kumar commented on HDDS-896:
--

Thanks [~msingh] for the review; updated the patch.

> Handle over replicated containers in SCM
> 
>
> Key: HDDS-896
> URL: https://issues.apache.org/jira/browse/HDDS-896
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-896.000.patch, HDDS-896.001.patch
>
>
> When SCM detects that a container is over-replicated, it has to delete some 
> replicas to bring the number of replicas to match the required value. If the 
> container is in QUASI_CLOSED state, we should check the {{originNodeId}} 
> field while choosing the replica to delete.
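
A minimal sketch of that selection rule (hypothetical types, not the SCM code): for a QUASI_CLOSED container, only a replica whose originNodeId is already covered by another kept replica is a safe deletion candidate.

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OverReplicationSketch {
  /** Hypothetical replica record: the datanode currently holding it
   *  and the node that originally wrote it (originNodeId). */
  static class Replica {
    final String datanodeId;
    final String originNodeId;
    Replica(String datanodeId, String originNodeId) {
      this.datanodeId = datanodeId;
      this.originNodeId = originNodeId;
    }
  }

  /**
   * Picks up to {@code excess} deletion candidates for a QUASI_CLOSED
   * container: a replica is only expendable if another kept replica
   * already carries the same originNodeId.
   */
  static List<Replica> deletionCandidates(List<Replica> replicas, int excess) {
    Set<String> seenOrigins = new HashSet<>();
    List<Replica> candidates = new ArrayList<>();
    for (Replica r : replicas) {
      // add(...) returns false when this origin was already kept once.
      if (!seenOrigins.add(r.originNodeId) && candidates.size() < excess) {
        candidates.add(r); // duplicate origin: safe to delete
      }
    }
    return candidates;
  }
}
{code}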



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-896) Handle over replicated containers in SCM

2018-12-20 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-896:
-
Attachment: HDDS-896.001.patch

> Handle over replicated containers in SCM
> 
>
> Key: HDDS-896
> URL: https://issues.apache.org/jira/browse/HDDS-896
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-896.000.patch, HDDS-896.001.patch
>
>
> When SCM detects that a container is over-replicated, it has to delete some 
> replicas to bring the number of replicas to match the required value. If the 
> container is in QUASI_CLOSED state, we should check the {{originNodeId}} 
> field while choosing the replica to delete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725896#comment-16725896
 ] 

Hadoop QA commented on HDFS-14161:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
14s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14161 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952527/HDFS-14161-HDFS-13891.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ab3868b6a261 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c9ebaf2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25835/testReport/ |
| Max. process+thread count | 952 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25835/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> 

[jira] [Updated] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14161:
---
Attachment: HDFS-14161-HDFS-13891.002.patch

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725812#comment-16725812
 ] 

Fei Hui commented on HDFS-14161:


Uploaded the v002 patch, fixing checkstyle.

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, 
> HDFS-14161-HDFS-13891.002.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> 

[jira] [Assigned] (HDFS-14163) Debug Admin Command Should Support Generic Options.

2018-12-20 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-14163:
---

Assignee: Ayush Saxena

> Debug Admin Command Should Support Generic Options.
> ---
>
> Key: HDFS-14163
> URL: https://issues.apache.org/jira/browse/HDFS-14163
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Major
>
> {noformat}
> namenode> ./bin/hdfs debug -fs hdfs://hacluster recoverLease -path /dest/file2
> Usage: hdfs debug  [arguments]
> These commands are for advanced users only.
> Incorrect usages may result in data loss. Use at your own risk.
> verifyMeta -meta  [-block ]
> computeMeta -block  -out 
> recoverLease -path  [-retries ]
> namenode>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14163) Debug Admin Command Should Support Generic Options.

2018-12-20 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14163:


 Summary: Debug Admin Command Should Support Generic Options.
 Key: HDFS-14163
 URL: https://issues.apache.org/jira/browse/HDFS-14163
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Harshakiran Reddy


{noformat}
namenode> ./bin/hdfs debug -fs hdfs://hacluster recoverLease -path /dest/file2
Usage: hdfs debug  [arguments]

These commands are for advanced users only.

Incorrect usages may result in data loss. Use at your own risk.

verifyMeta -meta  [-block ]
computeMeta -block  -out 
recoverLease -path  [-retries ]
namenode>
{noformat}
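
Hadoop command-line tools normally inherit the generic options (-fs, -D, and so on) by running through ToolRunner, which applies GenericOptionsParser to the configuration before run() is invoked. A minimal sketch of that wiring, with a hypothetical class name:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/** Hypothetical admin command wired through ToolRunner so that
 *  generic options such as -fs and -D are parsed before run(). */
public class DebugAdminSketch extends Configured implements Tool {
  @Override
  public int run(String[] args) throws Exception {
    // getConf() already reflects -fs/-D overrides applied by ToolRunner.
    System.out.println("fs.defaultFS = " + getConf().get("fs.defaultFS"));
    return 0;
  }

  public static void main(String[] args) throws Exception {
    System.exit(ToolRunner.run(new Configuration(), new DebugAdminSketch(), args));
  }
}
{code}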



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725792#comment-16725792
 ] 

Hadoop QA commented on HDFS-14161:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
50s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
4s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14161 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952516/HDFS-14161-HDFS-13891.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d403cec83784 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / c9ebaf2 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25834/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25834/testReport/ |
| Max. process+thread count | 1233 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Commented] (HDFS-14156) RBF : RollEdit command fail with router

2018-12-20 Thread Shubham Dewan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725732#comment-16725732
 ] 

Shubham Dewan commented on HDFS-14156:
--

[~elgoiri] I didn't get you.

> RBF : RollEdit command fail with router
> ---
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14156.001.patch
>
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}
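
The trace bottoms out in Class.cast against a primitive class object. A minimal reproduction of the failure, independent of RBF:

{code:java}
public class PrimitiveCastSketch {
  public static void main(String[] args) {
    Object value = Long.valueOf(42L);

    // Works: Long.class.isInstance(value) is true for a boxed Long.
    Long boxed = Long.class.cast(value);
    System.out.println(boxed);

    // Throws ClassCastException "Cannot cast java.lang.Long to long":
    // a primitive Class object reports isInstance(obj) == false for
    // every object, so Class.cast against long.class always fails.
    long unboxed = long.class.cast(value);
    System.out.println(unboxed); // never reached
  }
}
{code}

Casting against the boxed type (Long.class) instead avoids the exception, which suggests one plausible direction for the invokeConcurrent call in the trace.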



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14161:
---
Attachment: HDFS-14161-HDFS-13891.001.patch

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161-HDFS-13891.001.patch, HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:253)
>   at 
> 

[jira] [Commented] (HDFS-14161) RBF: Throw RetriableException instead of IOException so that client can retry when can not get connection

2018-12-20 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725743#comment-16725743
 ] 

Fei Hui commented on HDFS-14161:


[~elgoiri] Maybe throwing StandbyException is reasonable. I added a test case 
and rebased the code on branch HDFS-13891.
The test cases of TestRouterClientRejectOverload aim to test the 
ThreadPoolExecutor of RouterRpcClient. This issue is different: it occurs when 
the ConnectionManager returns a null Connection.
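
A sketch of the behavior the issue title proposes (the method shape below is hypothetical; only org.apache.hadoop.ipc.RetriableException is the real class): surface a retriable error instead of a plain IOException when the connection pool hands back null, so client retry policies can kick in.

{code:java}
import java.io.IOException;
import org.apache.hadoop.ipc.RetriableException;

/** Hypothetical guard around the connection lookup. */
class ConnectionGuardSketch {
  Object getConnectionOrRetry(Object connectionFromManager, String nnAddress)
      throws IOException {
    if (connectionFromManager == null) {
      // RetriableException extends IOException, but signals to the
      // client retry machinery that the call is worth retrying.
      throw new RetriableException(
          "Cannot get a connection to " + nnAddress + ", please retry");
    }
    return connectionFromManager;
  }
}
{code}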
 

> RBF: Throw RetriableException instead of IOException so that client can retry 
> when can not get connection
> -
>
> Key: HDFS-14161
> URL: https://issues.apache.org/jira/browse/HDFS-14161
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14161.001.patch
>
>
> Hive Client may hang when get IOException, stack follows
> {code:java}
> Exception in thread "Thread-150" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot get a 
> connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:554)
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:74)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Cannot 
> get a connection to bigdata-nn20.g01:8020
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.getConnection(RouterRpcClient.java:262)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:380)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:752)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1503)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1441)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>   at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:775)
>   at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>   at 
> 

[jira] [Comment Edited] (HDFS-14156) RBF : RollEdit command fail with router

2018-12-20 Thread Shubham Dewan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725732#comment-16725732
 ] 

Shubham Dewan edited comment on HDFS-14156 at 12/20/18 10:09 AM:
-

[~elgoiri] I didn't get you.

Should we go with the changes as done in the already uploaded patch, or with 
the changes suggested by [~ayushtkn]?


was (Author: shubham.dewan):
[~elgoiri] I didn't get you.

> RBF : RollEdit command fail with router
> ---
>
> Key: HDFS-14156
> URL: https://issues.apache.org/jira/browse/HDFS-14156
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Harshakiran Reddy
>Assignee: Shubham Dewan
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14156.001.patch
>
>
> {noformat}
> bin> ./hdfs dfsadmin -rollEdits
> rollEdits: Cannot cast java.lang.Long to long
> bin>
> {noformat}
> Trace :-
> {noformat}
> org.apache.hadoop.ipc.RemoteException(java.lang.ClassCastException): Cannot 
> cast java.lang.Long to long
> at java.lang.Class.cast(Class.java:3369)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:1085)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeConcurrent(RouterRpcClient.java:982)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rollEdits(RouterClientProtocol.java:900)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.rollEdits(RouterRpcServer.java:862)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rollEdits(ClientNamenodeProtocolServerSideTranslatorPB.java:899)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:824)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2684)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1520)
> at org.apache.hadoop.ipc.Client.call(Client.java:1466)
> at org.apache.hadoop.ipc.Client.call(Client.java:1376)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy11.rollEdits(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.rollEdits(ClientNamenodeProtocolTranslatorPB.java:804)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.rollEdits(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.rollEdits(DFSClient.java:2350)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.rollEdits(DistributedFileSystem.java:1550)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.rollEdits(DFSAdmin.java:850)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2353)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2568)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (HDFS-14140) JournalNodeSyncer authentication is failing in secure cluster

2018-12-20 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725693#comment-16725693
 ] 

Surendra Singh Lilhore commented on HDFS-14140:
---

[~hanishakoneru], can you review?

> JournalNodeSyncer authentication is failing in secure cluster
> -
>
> Key: HDFS-14140
> URL: https://issues.apache.org/jira/browse/HDFS-14140
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, security
>Affects Versions: 3.0.0, 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14140.001.patch
>
>
> {noformat}
> 2018-12-11 17:25:48,407 | ERROR | 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer$$Lambda$31/208830412@42bcafb2
>  | Download of Edit Log file for Syncing failed. Deleting temp file: 
> //edits.sync/edits_0049016-0049023 | 
> JournalNodeSyncer.java:464
> java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> Error while authenticating with endpoint: 
> https://xx/getJournal?jid=hacluster=49016=-64%3A716441797%3A1544438882625%3Amyhacluster=false
>   at org.apache.hadoop.hdfs.server.common.Util.doGetUrl(Util.java:160)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.downloadMissingLogSegment(JournalNodeSyncer.java:460)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.getMissingLogSegments(JournalNodeSyncer.java:360)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncWithJournalAtIndex(JournalNodeSyncer.java:262)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.syncJournals(JournalNodeSyncer.java:230)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer.lambda$startSyncJournalsDaemon$0(JournalNodeSyncer.java:190)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
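
One plausible direction for this kind of failure (not necessarily what the attached patch does): run the edit-log download as the JournalNode's Kerberos login user, e.g. via SecurityUtil.doAsLoginUser, so the SPNEGO handshake to the /getJournal endpoint can authenticate.

{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.SecurityUtil;

/** Sketch only: the download body is a hypothetical stand-in. */
class SyncDownloadSketch {
  boolean downloadAsLoginUser(final Runnable download) throws IOException {
    // doAsLoginUser executes the action as the service's login
    // (keytab) user, which carries valid Kerberos credentials.
    return SecurityUtil.doAsLoginUser(new PrivilegedExceptionAction<Boolean>() {
      @Override
      public Boolean run() {
        download.run(); // hypothetical stand-in for the HTTP fetch
        return Boolean.TRUE;
      }
    });
  }
}
{code}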



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14153) [SPS] : Add Support for Storage Policy Satisfier in WEBHDFS

2018-12-20 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14153:

Attachment: HDFS-14153-02.patch

> [SPS] : Add Support for Storage Policy Satisfier in WEBHDFS
> ---
>
> Key: HDFS-14153
> URL: https://issues.apache.org/jira/browse/HDFS-14153
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14153-01.patch, HDFS-14153-02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-896) Handle over replicated containers in SCM

2018-12-20 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725662#comment-16725662
 ] 

Mukul Kumar Singh commented on HDDS-896:


Thanks for working on this [~nandakumar131]. The patch looks good to me. A few 
comments:

1) KeyValueHandler.java:882: let's release the lock first and then process the 
container delete outside the try-catch block.
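
The suggested shape, as a generic Java sketch (names are illustrative, not the KeyValueHandler code): hold the lock only for the quick state change, then do the expensive delete with no lock held.

{code:java}
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class LockScopeSketch {
  private final Lock containerLock = new ReentrantLock();

  void deleteContainer() {
    boolean shouldDelete;
    containerLock.lock();
    try {
      shouldDelete = markForDelete(); // quick state change under lock
    } finally {
      containerLock.unlock();         // release before the heavy work
    }
    if (shouldDelete) {
      doExpensiveDelete();            // heavy I/O with no lock held
    }
  }

  private boolean markForDelete() { return true; }
  private void doExpensiveDelete() { /* remove container data */ }
}
{code}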

> Handle over replicated containers in SCM
> 
>
> Key: HDDS-896
> URL: https://issues.apache.org/jira/browse/HDDS-896
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-896.000.patch
>
>
> When SCM detects that a container is over-replicated, it has to delete some 
> replicas to bring the number of replicas to match the required value. If the 
> container is in QUASI_CLOSED state, we should check the {{originNodeId}} 
> field while choosing the replica to delete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-881) Encapsulate all client to OM requests into one request message

2018-12-20 Thread Mukul Kumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16725661#comment-16725661
 ] 

Mukul Kumar Singh commented on HDDS-881:


Thanks for updating the patch [~hanishakoneru].
+1 for the v9 patch. 

> Encapsulate all client to OM requests into one request message
> --
>
> Key: HDDS-881
> URL: https://issues.apache.org/jira/browse/HDDS-881
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-881.001.patch, HDDS-881.002.patch, 
> HDDS-881.003.patch, HDDS-881.004.patch, HDDS-881.005.patch, 
> HDDS-881.006.patch, HDDS-881.007.patch, HDDS-881.008.patch, HDDS-881.009.patch
>
>
> When OM receives a request, we need to transform it into a Ratis-compatible 
> request so that the OM's Ratis server can process it. 
> In this Jira, we just add support to convert a client request received by the 
> OM into a RaftClient request. This transformed request would later be passed 
> on to the OM's Ratis server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org