[jira] [Commented] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-03-01 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782284#comment-16782284
 ] 

star commented on HDFS-14314:
-

[~jojochuang], thanks for pointing out the mistake. A new patch is uploaded; 
the test now fails without the fix.

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Assignee: star
>Priority: Critical
> Fix For: 2.8.4
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314-trunk.005.patch, 
> HDFS-14314-trunk.006.patch, HDFS-14314.0.patch, HDFS-14314.2.patch, 
> HDFS-14314.patch
>
>
>   Since HDFS-7923, to rate-limit DN block reports, a DN asks the active NN 
> for a full block report lease id before sending a full block report, then 
> sends the report together with that lease id. If the lease id is invalid, 
> the NN rejects the report and logs "not in the pending set".
>   Consider the case where the NN is restarted while a DN is doing full 
> block reporting. The DN will later send a full block report with a lease 
> id acquired from the previous NN instance, which is invalid to the new NN 
> instance. Although the DN recognizes the new NN instance via heartbeat and 
> re-registers itself, it does not reset the lease id from the previous 
> instance.
>   The issue may cause DNs to temporarily go dead, making it unsafe to 
> restart the NN, especially in clusters with a large number of DNs. 
> HDFS-12914 reported the issue without any clue as to why it occurred, and 
> it remained unsolved.
>   To make it clear, look at the code below, taken from the offerService 
> method of class BPServiceActor and trimmed to the relevant parts. 
> fullBlockReportLeaseId is a local variable holding the lease id from the 
> NN. While the NN is restarting, the blockReport call throws, the exception 
> is caught by the catch block in the while loop, and fullBlockReportLeaseId 
> is therefore never reset to 0. After the NN has restarted, the DN sends a 
> full block report that the new NN instance rejects, and it will not send 
> another full block report until the next schedule, about an hour later.
>   The solution is simple: reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so the DN asks the new NN 
> instance for a valid lease id.
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
>     try {
>       final long startTime = scheduler.monotonicNow();
>       //
>       // Every so often, send heartbeat or block-report
>       //
>       final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>       HeartbeatResponse resp = null;
>       if (sendHeartbeat) {
>         boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
>             scheduler.isBlockReportDue(startTime);
>         scheduler.scheduleNextHeartbeat();
>         if (!dn.areHeartbeatsDisabledForTests()) {
>           resp = sendHeartBeat(requestBlockReportLease);
>           assert resp != null;
>           if (resp.getFullBlockReportLeaseId() != 0) {
>             if (fullBlockReportLeaseId != 0) {
>               LOG.warn(nnAddr + " sent back a full block report lease " +
>                   "ID of 0x" +
>                   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>                   ", but we already have a lease ID of 0x" +
>                   Long.toHexString(fullBlockReportLeaseId) + ". " +
>                   "Overwriting old lease ID.");
>             }
>             fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>           }
>         }
>       }
>       // ... (code elided)
>       if ((fullBlockReportLeaseId != 0) || forceFullBr) {
>         // The exception occurs here while the NN is restarting,
>         // so the reset on the next line is skipped.
>         cmds = blockReport(fullBlockReportLeaseId);
>         fullBlockReportLeaseId = 0;
>       }
>     } catch (RemoteException re) {
>       // ... (exception handled; fullBlockReportLeaseId is NOT reset)
>     }
>   } // while (shouldRun())
> } // offerService{code}
>  
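> A minimal sketch of that reset (illustrative, not the attached patch; it 
> assumes fullBlockReportLeaseId is promoted from a local variable to a 
> field of BPServiceActor so reRegister can reach it):
> {code:java}
> void reRegister() throws IOException {
>   if (shouldRun()) {
>     // Re-fetch namespace info and register with the (possibly new) NN.
>     NamespaceInfo nsInfo = retrieveNamespaceInfo();
>     register(nsInfo);
>     scheduler.scheduleHeartbeat();
>     // Forget the lease from the previous NN instance so the next
>     // heartbeat requests a fresh one.
>     fullBlockReportLeaseId = 0;
>   }
> }
> {code}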






[jira] [Updated] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-03-01 Thread star (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

star updated HDFS-14314:

Attachment: HDFS-14314-trunk.006.patch

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Assignee: star
>Priority: Critical
> Fix For: 2.8.4
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314-trunk.005.patch, 
> HDFS-14314-trunk.006.patch, HDFS-14314.0.patch, HDFS-14314.2.patch, 
> HDFS-14314.patch
>
>
>   Since HDFS-7923, to rate-limit DN block reports, a DN asks the active NN 
> for a full block report lease id before sending a full block report, then 
> sends the report together with that lease id. If the lease id is invalid, 
> the NN rejects the report and logs "not in the pending set".
>   Consider the case where the NN is restarted while a DN is doing full 
> block reporting. The DN will later send a full block report with a lease 
> id acquired from the previous NN instance, which is invalid to the new NN 
> instance. Although the DN recognizes the new NN instance via heartbeat and 
> re-registers itself, it does not reset the lease id from the previous 
> instance.
>   The issue may cause DNs to temporarily go dead, making it unsafe to 
> restart the NN, especially in clusters with a large number of DNs. 
> HDFS-12914 reported the issue without any clue as to why it occurred, and 
> it remained unsolved.
>   To make it clear, look at the code below, taken from the offerService 
> method of class BPServiceActor and trimmed to the relevant parts. 
> fullBlockReportLeaseId is a local variable holding the lease id from the 
> NN. While the NN is restarting, the blockReport call throws, the exception 
> is caught by the catch block in the while loop, and fullBlockReportLeaseId 
> is therefore never reset to 0. After the NN has restarted, the DN sends a 
> full block report that the new NN instance rejects, and it will not send 
> another full block report until the next schedule, about an hour later.
>   The solution is simple: reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so the DN asks the new NN 
> instance for a valid lease id.
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
>     try {
>       final long startTime = scheduler.monotonicNow();
>       //
>       // Every so often, send heartbeat or block-report
>       //
>       final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>       HeartbeatResponse resp = null;
>       if (sendHeartbeat) {
>         boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
>             scheduler.isBlockReportDue(startTime);
>         scheduler.scheduleNextHeartbeat();
>         if (!dn.areHeartbeatsDisabledForTests()) {
>           resp = sendHeartBeat(requestBlockReportLease);
>           assert resp != null;
>           if (resp.getFullBlockReportLeaseId() != 0) {
>             if (fullBlockReportLeaseId != 0) {
>               LOG.warn(nnAddr + " sent back a full block report lease " +
>                   "ID of 0x" +
>                   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>                   ", but we already have a lease ID of 0x" +
>                   Long.toHexString(fullBlockReportLeaseId) + ". " +
>                   "Overwriting old lease ID.");
>             }
>             fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>           }
>         }
>       }
>       // ... (code elided)
>       if ((fullBlockReportLeaseId != 0) || forceFullBr) {
>         // The exception occurs here while the NN is restarting,
>         // so the reset on the next line is skipped.
>         cmds = blockReport(fullBlockReportLeaseId);
>         fullBlockReportLeaseId = 0;
>       }
>     } catch (RemoteException re) {
>       // ... (exception handled; fullBlockReportLeaseId is NOT reset)
>     }
>   } // while (shouldRun())
> } // offerService{code}
>  






[jira] [Commented] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-01 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782273#comment-16782273
 ] 

Hanisha Koneru commented on HDDS-1072:
--

Reverted from trunk.

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch
>
>
> The RPC client should implement a retry and failover proxy provider to 
> fail over between OM Ratis clients. The failover should occur in two 
> scenarios:
> # When the client is unable to connect to the OM (either because of 
> network issues or because the OM is down), the retry proxy provider should 
> fail over to the next OM in the cluster.
> # When the OM Ratis client receives a response from the Ratis server, it 
> also gets the LeaderId of the server that processed the request (the 
> current leader OM nodeId). This information should be propagated back to 
> the client, and the failover proxy provider should fail over to the leader 
> OM node. This avoids an extra hop from a follower OM Ratis client to the 
> leader OM Ratis server on every request. A sketch follows below.
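> A hedged sketch of scenario 1 on top of Hadoop's generic 
> o.a.h.io.retry.FailoverProxyProvider contract (the class name and proxy 
> wiring are illustrative, not the attached patch):
> {code:java}
> import java.io.IOException;
> import java.util.List;
>
> import org.apache.hadoop.io.retry.FailoverProxyProvider;
> import org.apache.hadoop.ozone.om.protocol.OzoneManagerProtocol;
>
> public class RoundRobinOMProxyProvider
>     implements FailoverProxyProvider<OzoneManagerProtocol> {
>
>   private final List<ProxyInfo<OzoneManagerProtocol>> proxies;
>   private int current = 0;
>
>   public RoundRobinOMProxyProvider(
>       List<ProxyInfo<OzoneManagerProtocol>> proxies) {
>     this.proxies = proxies;
>   }
>
>   @Override
>   public Class<OzoneManagerProtocol> getInterface() {
>     return OzoneManagerProtocol.class;
>   }
>
>   @Override
>   public ProxyInfo<OzoneManagerProtocol> getProxy() {
>     return proxies.get(current);
>   }
>
>   @Override
>   public void performFailover(OzoneManagerProtocol currentProxy) {
>     // Scenario 1: the current OM is unreachable, move to the next one.
>     // Scenario 2 would instead jump directly to the reported leader.
>     current = (current + 1) % proxies.size();
>   }
>
>   @Override
>   public void close() throws IOException {
>     // Release the underlying RPC proxies here.
>   }
> }
> {code}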






[jira] [Commented] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782272#comment-16782272
 ] 

Hudson commented on HDDS-1072:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16108 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16108/])
Revert "HDDS-1072. Implement RetryProxy and FailoverProxy for OM 
(hanishakoneru: rev bc6fe7ad45986410afa1581572272913aa93e5ec)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
* (add) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/ha/package-info.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/ratis/TestOzoneManagerRatisServer.java
* (add) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/ha/OMProxyInfo.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OMRatisHelper.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHA.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerRatisClient.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
* (delete) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/package-info.java
* (add) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/ha/OMProxyProvider.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneHAClusterImpl.java


> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch
>
>
> The RPC client should implement a retry and failover proxy provider to 
> fail over between OM Ratis clients. The failover should occur in two 
> scenarios:
> # When the client is unable to connect to the OM (either because of 
> network issues or because the OM is down), the retry proxy provider should 
> fail over to the next OM in the cluster.
> # When the OM Ratis client receives a response from the Ratis server, it 
> also gets the LeaderId of the server that processed the request (the 
> current leader OM nodeId). This information should be propagated back to 
> the client, and the failover proxy provider should fail over to the leader 
> OM node. This avoids an extra hop from a follower OM Ratis client to the 
> leader OM Ratis server on every request.






[jira] [Comment Edited] (HDFS-14323) Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters in hdfs file path

2019-03-01 Thread Srinivasu Majeti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782256#comment-16782256
 ] 

Srinivasu Majeti edited comment on HDFS-14323 at 3/2/19 3:43 AM:
-

Hi [~jojochuang],

 It's with HDP 3.1 [having HDFS 3.1.1]. hdfs dfs -ls works, and distcp with 
hdfs://hostname:8020 also works fine. It's only the URL encoding done on the 
3.1 side that fails to decode on the 2.6 side, which does not have the 
decoding feature. We gave the fix to one of our customers yesterday and it 
works fine with the patch attached here :) . Further information on Hadoop 
versions: the encoding/decoding feature started from release-3.1.0-RC0 and 
is not present up to release-3.0.3-RC0. [~zvenczel] can comment and confirm.


was (Author: smajeti):
Hi [~jojochuang],

 It's with HDP 3.1 [having HDFS 3.1.1]. hdfs dfs -ls works, and distcp with 
hdfs://hostname:8020 also works fine. It's only the URL encoding done on the 
3.1 side that fails to decode on the 2.6 side, which does not have the 
decoding feature. We gave the fix to one of our customers yesterday and it 
works fine with the patch attached here :) .

> Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters 
> in hdfs file path
> ---
>
> Key: HDFS-14323
> URL: https://issues.apache.org/jira/browse/HDFS-14323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Srinivasu Majeti
>Priority: Major
> Attachments: HDFS-14323v0.patch
>
>
> There was an enhancement to allow semicolons in source/target URLs for the 
> distcp use case (HDFS-13176) and a backward-compatibility fix (HDFS-13582). 
> Still, there seems to be an issue when distcp is triggered from a 3.x 
> cluster to pull webhdfs data from a 2.x cluster. We may need to adjust the 
> existing fix as described below, by checking whether the URL is already 
> encoded or not. That fixes it (see the illustration after the diff).
> diff --git 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
>  
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> index 5936603c34a..dc790286aff 100644
> --- 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> +++ 
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> @@ -609,7 +609,10 @@ URL toUrl(final HttpOpParam.Op op, final Path fspath,
>  boolean pathAlreadyEncoded = false;
>  try {
>  fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
> - pathAlreadyEncoded = true;
> + if(!fspathUri.getPath().equals(fspathUriDecoded))
> + {
> + pathAlreadyEncoded = true;
> + }
>  } catch (IllegalArgumentException ex) {
>  LOG.trace("Cannot decode URL encoded file", ex);
>  }
>  
>  
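> A quick illustration of the added check using java.net.URLDecoder (the 
> paths are made up):
> {code:java}
> String plain   = "/user/test";
> String encoded = "/user/t%25st";        // "%25" is an encoded '%'
> URLDecoder.decode(plain, "UTF-8");      // "/user/test" - unchanged
> URLDecoder.decode(encoded, "UTF-8");    // "/user/t%st" - differs
> {code}
> Decoding an unencoded path returns it unchanged, so comparing the input 
> with the decoded output tells the two cases apart, instead of marking 
> every decodable path as already encoded.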






[jira] [Comment Edited] (HDFS-14323) Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters in hdfs file path

2019-03-01 Thread Srinivasu Majeti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782256#comment-16782256
 ] 

Srinivasu Majeti edited comment on HDFS-14323 at 3/2/19 3:23 AM:
-

Hi [~jojochuang],

 It's with HDP 3.1 [having HDFS 3.1.1]. hdfs dfs -ls works, and distcp with 
hdfs://hostname:8020 also works fine. It's only the URL encoding done on the 
3.1 side that fails to decode on the 2.6 side, which does not have the 
decoding feature. We gave the fix to one of our customers yesterday and it 
works fine with the patch attached here :) .


was (Author: smajeti):
Hi [~jojochuang],

 It's with HDP 3.1 [having HDFS 3.1.1]. hdfs dfs -ls works, and distcp with 
hdfs://hostname:8020 also works fine. It's only the URL encoding done on the 
3.1 side that fails to decode on the 2.6 side, which does not have the 
decoding feature. We gave the fix to WalMart yesterday and it works fine 
with the patch attached here :) .

> Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters 
> in hdfs file path
> ---
>
> Key: HDFS-14323
> URL: https://issues.apache.org/jira/browse/HDFS-14323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Srinivasu Majeti
>Priority: Major
> Attachments: HDFS-14323v0.patch
>
>
> There was an enhancement to allow semicolons in source/target URLs for the 
> distcp use case (HDFS-13176) and a backward-compatibility fix (HDFS-13582). 
> Still, there seems to be an issue when distcp is triggered from a 3.x 
> cluster to pull webhdfs data from a 2.x cluster. We may need to adjust the 
> existing fix as described below, by checking whether the URL is already 
> encoded or not. That fixes it.
> diff --git 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
>  
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> index 5936603c34a..dc790286aff 100644
> --- 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> +++ 
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> @@ -609,7 +609,10 @@ URL toUrl(final HttpOpParam.Op op, final Path fspath,
>  boolean pathAlreadyEncoded = false;
>  try {
>  fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
> - pathAlreadyEncoded = true;
> + if(!fspathUri.getPath().equals(fspathUriDecoded))
> + {
> + pathAlreadyEncoded = true;
> + }
>  } catch (IllegalArgumentException ex) {
>  LOG.trace("Cannot decode URL encoded file", ex);
>  }
>  
>  






[jira] [Commented] (HDFS-14323) Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters in hdfs file path

2019-03-01 Thread Srinivasu Majeti (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782256#comment-16782256
 ] 

Srinivasu Majeti commented on HDFS-14323:
-

Hi [~jojochuang],

 It's with HDP 3.1 [having HDFS 3.1.1]. hdfs dfs -ls works, and distcp with 
hdfs://hostname:8020 also works fine. It's only the URL encoding done on the 
3.1 side that fails to decode on the 2.6 side, which does not have the 
decoding feature. We gave the fix to WalMart yesterday and it works fine 
with the patch attached here :) .

> Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters 
> in hdfs file path
> ---
>
> Key: HDFS-14323
> URL: https://issues.apache.org/jira/browse/HDFS-14323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Srinivasu Majeti
>Priority: Major
> Attachments: HDFS-14323v0.patch
>
>
> There was an enhancement to allow semicolons in source/target URLs for the 
> distcp use case (HDFS-13176) and a backward-compatibility fix (HDFS-13582). 
> Still, there seems to be an issue when distcp is triggered from a 3.x 
> cluster to pull webhdfs data from a 2.x cluster. We may need to adjust the 
> existing fix as described below, by checking whether the URL is already 
> encoded or not. That fixes it.
> diff --git 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
>  
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> index 5936603c34a..dc790286aff 100644
> --- 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> +++ 
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> @@ -609,7 +609,10 @@ URL toUrl(final HttpOpParam.Op op, final Path fspath,
>  boolean pathAlreadyEncoded = false;
>  try {
>  fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
> - pathAlreadyEncoded = true;
> + if(!fspathUri.getPath().equals(fspathUriDecoded))
> + {
> + pathAlreadyEncoded = true;
> + }
>  } catch (IllegalArgumentException ex) {
>  LOG.trace("Cannot decode URL encoded file", ex);
>  }
>  
>  






[jira] [Commented] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-03-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782254#comment-16782254
 ] 

Wei-Chiu Chuang commented on HDFS-14314:


Looks like the new test (testRefreshLeaseId) succeeds without the fix, and that 
makes it difficult to verify the fix. Could you check again?

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Assignee: star
>Priority: Critical
> Fix For: 2.8.4
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314-trunk.005.patch, HDFS-14314.0.patch, 
> HDFS-14314.2.patch, HDFS-14314.patch
>
>
>   Since HDFS-7923, to rate-limit DN block reports, a DN asks the active NN 
> for a full block report lease id before sending a full block report, then 
> sends the report together with that lease id. If the lease id is invalid, 
> the NN rejects the report and logs "not in the pending set".
>   Consider the case where the NN is restarted while a DN is doing full 
> block reporting. The DN will later send a full block report with a lease 
> id acquired from the previous NN instance, which is invalid to the new NN 
> instance. Although the DN recognizes the new NN instance via heartbeat and 
> re-registers itself, it does not reset the lease id from the previous 
> instance.
>   The issue may cause DNs to temporarily go dead, making it unsafe to 
> restart the NN, especially in clusters with a large number of DNs. 
> HDFS-12914 reported the issue without any clue as to why it occurred, and 
> it remained unsolved.
>   To make it clear, look at the code below, taken from the offerService 
> method of class BPServiceActor and trimmed to the relevant parts. 
> fullBlockReportLeaseId is a local variable holding the lease id from the 
> NN. While the NN is restarting, the blockReport call throws, the exception 
> is caught by the catch block in the while loop, and fullBlockReportLeaseId 
> is therefore never reset to 0. After the NN has restarted, the DN sends a 
> full block report that the new NN instance rejects, and it will not send 
> another full block report until the next schedule, about an hour later.
>   The solution is simple: reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so the DN asks the new NN 
> instance for a valid lease id.
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
>     try {
>       final long startTime = scheduler.monotonicNow();
>       //
>       // Every so often, send heartbeat or block-report
>       //
>       final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>       HeartbeatResponse resp = null;
>       if (sendHeartbeat) {
>         boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
>             scheduler.isBlockReportDue(startTime);
>         scheduler.scheduleNextHeartbeat();
>         if (!dn.areHeartbeatsDisabledForTests()) {
>           resp = sendHeartBeat(requestBlockReportLease);
>           assert resp != null;
>           if (resp.getFullBlockReportLeaseId() != 0) {
>             if (fullBlockReportLeaseId != 0) {
>               LOG.warn(nnAddr + " sent back a full block report lease " +
>                   "ID of 0x" +
>                   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>                   ", but we already have a lease ID of 0x" +
>                   Long.toHexString(fullBlockReportLeaseId) + ". " +
>                   "Overwriting old lease ID.");
>             }
>             fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>           }
>         }
>       }
>       // ... (code elided)
>       if ((fullBlockReportLeaseId != 0) || forceFullBr) {
>         // The exception occurs here while the NN is restarting,
>         // so the reset on the next line is skipped.
>         cmds = blockReport(fullBlockReportLeaseId);
>         fullBlockReportLeaseId = 0;
>       }
>     } catch (RemoteException re) {
>       // ... (exception handled; fullBlockReportLeaseId is NOT reset)
>     }
>   } // while (shouldRun())
> } // offerService{code}
>  






[jira] [Commented] (HDFS-14111) hdfsOpenFile on HDFS causes unnecessary IO from file offset 0

2019-03-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782252#comment-16782252
 ] 

Wei-Chiu Chuang commented on HDFS-14111:


Other than the errno part, which I have no idea about, the patch looks good 
to me.

> hdfsOpenFile on HDFS causes unnecessary IO from file offset 0
> -
>
> Key: HDFS-14111
> URL: https://issues.apache.org/jira/browse/HDFS-14111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs
>Affects Versions: 3.2.0
>Reporter: Todd Lipcon
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14111.001.patch, HDFS-14111.002.patch, 
> HDFS-14111.003.patch
>
>
> hdfsOpenFile() calls readDirect() with a 0-length argument in order to check 
> whether the underlying stream supports bytebuffer reads. With DFSInputStream, 
> the read(0) isn't short circuited, and results in the DFSClient opening a 
> block reader. In the case of a remote block, the block reader will actually 
> issue a read of the whole block, causing the datanode to perform unnecessary 
> IO and network transfers in order to fill up the client's TCP buffers. This 
> causes performance degradation.
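> A hedged sketch of a possible short circuit (the helper names mirror 
> DFSInputStream internals from memory and may differ across versions):
> {code:java}
> @Override
> public synchronized int read(ByteBuffer buf) throws IOException {
>   if (buf.remaining() == 0) {
>     // Zero-length capability probe from hdfsOpenFile: answer immediately
>     // instead of opening a (potentially remote) block reader.
>     return 0;
>   }
>   return readWithStrategy(new ByteBufferStrategy(buf));
> }
> {code}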






[jira] [Updated] (HDFS-3246) pRead equivalent for direct read path

2019-03-01 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-3246:
---
Status: Patch Available  (was: Open)

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch, HDFS-3246.005.patch, 
> HDFS-3246.006.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.
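> One possible shape for such an API (an illustrative sketch, not 
> necessarily the interface the attached patches add):
> {code:java}
> import java.io.IOException;
> import java.nio.ByteBuffer;
>
> public interface ByteBufferPositionedReadable {
>   /**
>    * Reads up to buf.remaining() bytes into buf from the given file
>    * position, without changing the stream's current offset.
>    *
>    * @return the number of bytes read, or -1 at end of file
>    */
>   int read(long position, ByteBuffer buf) throws IOException;
> }
> {code}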






[jira] [Updated] (HDFS-3246) pRead equivalent for direct read path

2019-03-01 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-3246:
---
Status: Open  (was: Patch Available)

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch, HDFS-3246.005.patch, 
> HDFS-3246.006.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.






[jira] [Updated] (HDFS-3246) pRead equivalent for direct read path

2019-03-01 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-3246:
---
Attachment: HDFS-3246.006.patch

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch, HDFS-3246.005.patch, 
> HDFS-3246.006.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.






[jira] [Commented] (HDFS-14321) Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled

2019-03-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782240#comment-16782240
 ] 

Wei-Chiu Chuang commented on HDFS-14321:


LGTM pending Jenkins

> Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled
> -
>
> Key: HDFS-14321
> URL: https://issues.apache.org/jira/browse/HDFS-14321
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14321.001.patch
>
>
> The JVM exposes an option called {{-Xcheck:jni}} which runs various checks 
> against JNI usage by applications. Further explanation of this JVM option can 
> be found in: 
> [https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts002.html]
>  and 
> [https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/jni_debug.html].
>  When run with this option, the JVM will print out any warnings or errors it 
> encounters with the JNI.
> We should run the libhdfs tests with {{-Xcheck:jni}} (can be added to 
> {{LIBHDFS_OPTS}}) and fix any warnings / errors. We should add this option to 
> our ctest runs as well to ensure no regressions are introduced to libhdfs.






[jira] [Commented] (HDFS-3246) pRead equivalent for direct read path

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782238#comment-16782238
 ] 

Hadoop QA commented on HDFS-3246:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 59s{color} | {color:orange} root: The patch generated 5 new + 116 unchanged 
- 2 fixed = 121 total (was 118) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 26s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
1s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
34s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}201m 10s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Updated] (HDFS-14270) [SBN Read] StateId and TrasactionId not present in Trace level logging

2019-03-01 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14270:
--
Attachment: HDFS-14270.001.patch
Status: Patch Available  (was: In Progress)

Added trace logging in Server#processRpcRequest(). Please review.

Thanks.

> [SBN Read] StateId and TrasactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
> Attachments: HDFS-14270.001.patch
>
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
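> An illustrative example of the kind of trace line being requested (names 
> and format are hypothetical):
> {code:java}
> if (LOG.isTraceEnabled()) {
>   LOG.trace("processRpcRequest: callId=" + header.getCallId()
>       + " clientStateId=" + clientStateId
>       + " serverStateId=" + alignmentContext.getLastSeenStateId());
> }
> {code}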
> CC: [~jojochuang], [~csun], [~shv]






[jira] [Work started] (HDFS-14270) [SBN Read] StateId and TrasactionId not present in Trace level logging

2019-03-01 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14270 started by Shweta.
-
> [SBN Read] StateId and TrasactionId not present in Trace level logging
> --
>
> Key: HDFS-14270
> URL: https://issues.apache.org/jira/browse/HDFS-14270
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Shweta
>Assignee: Shweta
>Priority: Trivial
>
> While running the command "hdfs --loglevel TRACE dfs -ls /" it was seen that 
> stateId and TransactionId do not appear in the logs. How does one see the 
> stateId and TransactionId in the logs? Is there a different approach?
> CC: [~jojochuang], [~csun], [~shv]






[jira] [Updated] (HDDS-807) Period should be an invalid character in bucket names

2019-03-01 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-807:
-
Attachment: HDDS-807.patch

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, 
> then such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  
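> A hedged sketch of the parsing rule these examples imply (illustrative 
> only):
> {code:java}
> // Split the authority on the first two periods. With periods banned in
> // bucket and volume names, the remainder (if any) must be the OM
> // host[:port]; with no remainder, the OM address comes from config.
> String authority = "bucket.volume.om-host.example.com:5678";
> String[] parts = authority.split("\\.", 3);
> String bucket = parts[0];                     // "bucket"
> String volume = parts[1];                     // "volume"
> String omHostPort = parts.length == 3
>     ? parts[2]                                // "om-host.example.com:5678"
>     : null;                                   // fall back to configuration
> {code}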






[jira] [Assigned] (HDDS-453) OM and SCM should use picocli to parse arguments

2019-03-01 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-453:


Assignee: Aravindan Vijayan  (was: Siddharth Wagle)

> OM and SCM should use picocli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use the picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].






[jira] [Updated] (HDDS-807) Period should be an invalid character in bucket names

2019-03-01 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-807:
-
Attachment: (was: HDDS-807.patch)

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, 
> then such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  






[jira] [Updated] (HDDS-807) Period should be an invalid character in bucket names

2019-03-01 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-807:
-
Status: Patch Available  (was: In Progress)

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, 
> then such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  






[jira] [Updated] (HDDS-807) Period should be an invalid character in bucket names

2019-03-01 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-807:
-
Attachment: HDDS-807.01.patch

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
> Attachments: HDDS-807.01.patch
>
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path. E.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/. Hence we cannot hand a fully 
> qualified URL with OM hostname to a client without setting up config files 
> beforehand. This is inconvenient. It also means there is no way to perform a 
> distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with OM hostname and port 
> _bucket.volume.om-host.example.com_. If we allow periods in bucket names, 
> then such fully qualified paths cannot be parsed unambiguously. However, if we 
> disallow periods, then we can support all of the following paths 
> unambiguously.
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> Open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to HDFS nameservice. This 
> will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to find if _ns1_ is known as an Ozone nameservice via 
> configuration. If so then treat it as the name of an HA service. Else treat 
> it as a hostname.
>  






[jira] [Commented] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782223#comment-16782223
 ] 

Hadoop QA commented on HDFS-14317:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 608 unchanged - 4 fixed = 608 total (was 612) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960841/HDFS-14317.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux cd6b7f6a80f7 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5a15f7b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Updated] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-03-01 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14205:

Attachment: HDFS-14205-branch-2.006.patch

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch, 
> HDFS-14205-branch-2.004.patch, HDFS-14205-branch-2.005.patch, 
> HDFS-14205-branch-2.006.patch
>
>
> Currently support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as this is required by HDFS-12943 
> (consistent read from standby) backport to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14320) Support skipTrash for WebHDFS

2019-03-01 Thread Karthik Palanisamy (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782219#comment-16782219
 ] 

Karthik Palanisamy commented on HDFS-14320:
---

Thank you [~jojochuang]. I will upload a new patch with a doc update.

> Support skipTrash for WebHDFS 
> --
>
> Key: HDFS-14320
> URL: https://issues.apache.org/jira/browse/HDFS-14320
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, webhdfs
>Affects Versions: 3.2.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: HDFS-14320-001.patch, HDFS-14320-002.patch, 
> HDFS-14320-003.patch, HDFS-14320-004.patch, HDFS-14320-005.patch
>
>
> Files/directories deleted via the WebHDFS REST call don't go through the 
> trash; they are deleted permanently. This feature is very important to us 
> because one of our users accidentally deleted a large directory.
> By default, the skiptrash option is set to true (skiptrash=true), so any 
> files deleted using curl are permanently removed.
> Example:
> curl -iv -X DELETE 
> "http://<namenode-host>:50070/webhdfs/v1/tmp/sampledata?op=DELETE&user.name=hdfs&recursive=true"
>  
> Use skiptrash=false to move files to the trash instead.
> Example:
> curl -iv -X DELETE 
> "http://<namenode-host>:50070/webhdfs/v1/tmp/sampledata?op=DELETE&user.name=hdfs&recursive=true&skiptrash=false"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14259) RBF: Fix safemode message for Router

2019-03-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782216#comment-16782216
 ] 

Íñigo Goiri commented on HDFS-14259:


Thanks [~RANith] for tackling my comments.
The unit test failure is unrelated (we should check what's wrong here).
+1 on  [^HDFS-14259-HDFS-13891.002.patch].


> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch, 
> HDFS-14259-HDFS-13891.001.patch, HDFS-14259-HDFS-13891.002.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the safe-mode message when the state is anything other than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.
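>
> A minimal sketch of the reversed check (added for illustration; member names 
> are taken from the snippet above):
> {code:java}
>   public String getSafemode() {
>     try {
>       // Reversed: report "Safe mode is ON" only when the Router really
>       // is in the SAFEMODE state.
>       if (getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}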



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14321) Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled

2019-03-01 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782214#comment-16782214
 ] 

Sahil Takiar commented on HDFS-14321:
-

Digging into this some more, I think {{hadoopRzOptions::byteBufferPool}} should 
be a global ref. If {{hadoopRzOptionsSetByteBufferPool}} simply created a local 
ref to {{hadoopRzOptions::byteBufferPool}} then the reference would be lost 
once execution returned to Java. By using a global ref, we 
ensure that {{byteBufferPool}} does not get garbage collected by the JVM. Since 
the {{byteBufferPool}} is expected to live across calls to {{hadoopReadZero}}, 
using a local ref does not make sense.

This is based on my understanding of the JNI and the difference between local 
vs. global references: 
http://journals.ecs.soton.ac.uk/java/tutorial/native1.1/implementing/refs.html 
I'm not a JNI expert, so my understanding might be off, but this patch fixes 
the {{FATAL ERROR}}.

The second part of this patch is to add {{-Xcheck:jni}} to {{LIBHDFS_OPTS}} 
when running all the libhdfs ctests. The drawback here is that adding this 
pollutes the logs with a bunch of warnings about exception handling (see 
above). The benefit is that it ensures we don't make any changes to libhdfs 
that would result in more fatal errors. IMO we can live with the 
extraneous logging, but open to changing this if others feel differently.

> Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled
> -
>
> Key: HDFS-14321
> URL: https://issues.apache.org/jira/browse/HDFS-14321
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14321.001.patch
>
>
> The JVM exposes an option called {{-Xcheck:jni}} which runs various checks 
> against JNI usage by applications. Further explanation of this JVM option can 
> be found in: 
> [https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts002.html]
>  and 
> [https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/jni_debug.html].
>  When run with this option, the JVM will print out any warnings or errors it 
> encounters with the JNI.
> We should run the libhdfs tests with {{-Xcheck:jni}} (can be added to 
> {{LIBHDFS_OPTS}}) and fix any warnings / errors. We should add this option to 
> our ctest runs as well to ensure no regressions are introduced to libhdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-03-01 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14316:
---
Attachment: HDFS-14316-HDFS-13891.005.patch

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch, HDFS-14316-HDFS-13891.002.patch, 
> HDFS-14316-HDFS-13891.003.patch, HDFS-14316-HDFS-13891.004.patch, 
> HDFS-14316-HDFS-13891.005.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14321) Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled

2019-03-01 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14321:

Attachment: HDFS-14321.001.patch

> Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled
> -
>
> Key: HDFS-14321
> URL: https://issues.apache.org/jira/browse/HDFS-14321
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14321.001.patch
>
>
> The JVM exposes an option called {{-Xcheck:jni}} which runs various checks 
> against JNI usage by applications. Further explanation of this JVM option can 
> be found in: 
> [https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts002.html]
>  and 
> [https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/jni_debug.html].
>  When run with this option, the JVM will print out any warnings or errors it 
> encounters with the JNI.
> We should run the libhdfs tests with {{-Xcheck:jni}} (can be added to 
> {{LIBHDFS_OPTS}}) and fix any warnings / errors. We should add this option to 
> our ctest runs as well to ensure no regressions are introduced to libhdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-03-01 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-134:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution. I've committed the patch to trunk. 

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-134.00.patch, HDDS-134.01.patch, HDDS-134.02.patch, 
> HDDS-134.03.patch, HDDS-134.04.patch, HDDS-134.05.patch, HDDS-134.06.patch, 
> HDDS-134.07.patch, HDDS-134.08.patch, HDDS-134.09.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Initialize OM keypair and get SCM signed certificate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14321) Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled

2019-03-01 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-14321:

Status: Patch Available  (was: Open)

> Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled
> -
>
> Key: HDFS-14321
> URL: https://issues.apache.org/jira/browse/HDFS-14321
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14321.001.patch
>
>
> The JVM exposes an option called {{-Xcheck:jni}} which runs various checks 
> against JNI usage by applications. Further explanation of this JVM option can 
> be found in: 
> [https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts002.html]
>  and 
> [https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/jni_debug.html].
>  When run with this option, the JVM will print out any warnings or errors it 
> encounters with the JNI.
> We should run the libhdfs tests with {{-Xcheck:jni}} (can be added to 
> {{LIBHDFS_OPTS}}) and fix any warnings / errors. We should add this option to 
> our ctest runs as well to ensure no regressions are introduced to libhdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-01 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reopened HDDS-1072:
--

With HDDS-1072, Freon tests are failing with the following exception:
{code:java}
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClient(OzoneClientFactory.java:92)
at 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.init(RandomKeyGenerator.java:213)
at 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.call(RandomKeyGenerator.java:228)
at 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.call(RandomKeyGenerator.java:82)
at picocli.CommandLine.execute(CommandLine.java:919)
at picocli.CommandLine.access$700(CommandLine.java:104)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
at org.apache.hadoop.ozone.freon.Freon.execute(Freon.java:53)
at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
at org.apache.hadoop.ozone.freon.Freon.main(Freon.java:79)
Caused by: java.lang.RuntimeException: java.lang.NoSuchFieldException: versionID
at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:187)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invocation.<init>(WritableRpcEngine.java:114)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at com.sun.proxy.$Proxy13.submitRequest(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy13.submitRequest(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
at com.sun.proxy.$Proxy13.submitRequest(Unknown Source)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:290)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1101)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:214)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:148)
... 20 more
Caused by: java.lang.NoSuchFieldException: versionID
at java.lang.Class.getField(Class.java:1703)
at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:182)
... 43 more
Couldn't create protocol class org.apache.hadoop.ozone.client.rpc.RpcClient
{code}

Thank you [~msingh] for reporting this. I am going to revert the patch from 
trunk and work on a fix.
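
For context, {{WritableRpcEngine}} resolves the protocol version reflectively 
from a {{versionID}} field on the protocol interface, which is why the 
{{NoSuchFieldException: versionID}} above is fatal. A minimal sketch of that 
convention (the interface below is hypothetical, not the actual OM protocol):
{code:java}
import java.io.IOException;

// Hypothetical protocol interface illustrating the Hadoop RPC convention:
// org.apache.hadoop.ipc.RPC.getProtocolVersion() reads the protocol's
// "versionID" field via reflection, so a protocol interface without it
// fails with java.lang.NoSuchFieldException: versionID at call time.
public interface ExampleClientProtocol {
  // Implicitly public static final on an interface; read via reflection.
  long versionID = 1L;

  String submitRequest(String request) throws IOException;
}
{code}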

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch
>
>
> RPC Client should implement a retry and failover proxy provider to failover 
> between OM Ratis 

[jira] [Commented] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782205#comment-16782205
 ] 

Hadoop QA commented on HDFS-14317:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 604 unchanged - 4 fixed = 604 total (was 608) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14317 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960826/HDFS-14317.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 60a343494990 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cab8529 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Comment Edited] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-01 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782201#comment-16782201
 ] 

Hanisha Koneru edited comment on HDDS-1072 at 3/1/19 11:22 PM:
---

With HDDS-1072, Freon tests are failing with the following exception:
{code:java}
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClient(OzoneClientFactory.java:92)
at 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.init(RandomKeyGenerator.java:213)
at 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.call(RandomKeyGenerator.java:228)
at 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.call(RandomKeyGenerator.java:82)
at picocli.CommandLine.execute(CommandLine.java:919)
at picocli.CommandLine.access$700(CommandLine.java:104)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
at 
picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
at org.apache.hadoop.ozone.freon.Freon.execute(Freon.java:53)
at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
at org.apache.hadoop.ozone.freon.Freon.main(Freon.java:79)
Caused by: java.lang.RuntimeException: java.lang.NoSuchFieldException: versionID
at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:187)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invocation.<init>(WritableRpcEngine.java:114)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:250)
at com.sun.proxy.$Proxy13.submitRequest(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy13.submitRequest(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
at com.sun.proxy.$Proxy13.submitRequest(Unknown Source)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:290)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1101)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:214)
at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:148)
... 20 more
Caused by: java.lang.NoSuchFieldException: versionID
at java.lang.Class.getField(Class.java:1703)
at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:182)
... 43 more
Couldn't create protocol class org.apache.hadoop.ozone.client.rpc.RpcClient
{code}

Thank you [~msingh] and [~ajayydv] for reporting this. I am going to revert the 
patch from trunk and work on a fix.


was (Author: hanishakoneru):
With HDDS-1072, Freon tests are failing with the following exception:
{code:java}
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)

[jira] [Commented] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-01 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782197#comment-16782197
 ] 

Ajay Kumar commented on HDDS-1072:
--

This patch breaks almost all robot tests.

{code}java.lang.reflect.InvocationTargetException
at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
 Method)
at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at 
java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
at 
org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:169)
at 
org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:107)
at 
org.apache.hadoop.fs.ozone.OzoneFileSystem.initialize(OzoneFileSystem.java:159)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:352)
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250)
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233)
at 
org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:104)
at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
Caused by: java.lang.RuntimeException: java.lang.NoSuchFieldException: versionID
at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:183)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invocation.<init>(WritableRpcEngine.java:114)
at 
org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:247)
at com.sun.proxy.$Proxy11.submitRequest(Unknown Source)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy11.submitRequest(Unknown Source)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:282)
at 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1095)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:217)
at 
org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:151)
... 23 more
Caused by: java.lang.NoSuchFieldException: versionID
at java.base/java.lang.Class.getField(Class.java:2000)
at org.apache.hadoop.ipc.RPC.getProtocolVersion(RPC.java:179)
... 40 more{code}

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch

[jira] [Commented] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-01 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782170#comment-16782170
 ] 

Chao Sun commented on HDFS-14317:
-

Regarding the time unit issue for log rolling and tailing, perhaps we should 
file a JIRA to fix this: someone may want to use a sub-second tailing 
frequency for standby reads, such as 100ms, but right now that value loses 
precision and is converted to 0ms.
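
To illustrate, a minimal standalone sketch of the precision loss, assuming 
the period is read via {{Configuration.getTimeDuration}} in whole seconds 
(the class below is hypothetical):
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class TailPeriodPrecision {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Operator intent: tail in-progress edits every 100 milliseconds.
    conf.set("dfs.ha.tail-edits.period", "100ms");
    // Reading the duration in SECONDS truncates sub-second values to 0.
    long periodSec = conf.getTimeDuration(
        "dfs.ha.tail-edits.period", 60, TimeUnit.SECONDS);
    System.out.println(periodSec); // prints 0 -- the 100ms intent is lost
  }
}
{code}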

> Standby does not trigger edit log rolling when in-progress edit log tailing 
> is enabled
> --
>
> Key: HDFS-14317
> URL: https://issues.apache.org/jira/browse/HDFS-14317
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Critical
> Attachments: HDFS-14317.001.patch, HDFS-14317.002.patch, 
> HDFS-14317.003.patch, HDFS-14317.004.patch
>
>
> The standby uses the following method to check if it is time to trigger edit 
> log rolling on active.
> {code}
>   /**
>* @return true if the configured log roll period has elapsed.
>*/
>   private boolean tooLongSinceLastLoad() {
> return logRollPeriodMs >= 0 && 
>   (monotonicNow() - lastLoadTimeMs) > logRollPeriodMs ;
>   }
> {code}
> In doTailEdits(), lastLoadTimeMs is updated when standby is able to 
> successfully tail any edits
> {code}
>   if (editsLoaded > 0) {
> lastLoadTimeMs = monotonicNow();
>   }
> {code}
> The default configuration for {{dfs.ha.log-roll.period}} is 120 seconds and 
> {{dfs.ha.tail-edits.period}} is 60 seconds. With in-progress edit log tailing 
> enabled, tooLongSinceLastLoad() will almost never return true resulting in 
> edit logs not rolled for a long time until this configuration 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} takes effect.
> [In our deployment, this resulted in in-progress edit logs getting deleted. 
> The sequence of events is that standby was able to checkpoint twice while the 
> in-progress edit log was growing on active. When the 
> NNStorageRetentionManager decided to cleanup old checkpoints and edit logs, 
> it cleaned up the in-progress edit log from active and QJM (as the txnid on 
> in-progress edit log was older than the 2 most recent checkpoints) resulting 
> in irrecoverably losing a few minutes worth of metadata].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14326) Add CorruptFilesCount to JMX

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782169#comment-16782169
 ] 

Hadoop QA commented on HDFS-14326:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
262 unchanged - 0 fixed = 263 total (was 262) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
48s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}196m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.qjournal.client.TestQJMWithFaults |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14326 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960813/HDFS-14326.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 675554c2a3fe 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDFS-14259) RBF: Fix safemode message for Router

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782161#comment-16782161
 ] 

Hadoop QA commented on HDFS-14259:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 8s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m  9s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14259 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960833/HDFS-14259-HDFS-13891.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f16dda61402b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / da99828 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26377/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26377/testReport/ |
| Max. process+thread count | 1371 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Comment Edited] (HDFS-14321) Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled

2019-03-01 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778763#comment-16778763
 ] 

Sahil Takiar edited comment on HDFS-14321 at 3/1/19 10:04 PM:
--

When running the existing tests with {{\-Xcheck:jni}} I only see one error: 
{{FATAL ERROR in native method: Invalid global JNI handle passed to 
DeleteGlobalRef}}, which seems to be caused by {{hadoopRzOptionsFree}} calling 
{{DeleteGlobalRef}} on {{opts->byteBufferPool}} which is not a global ref. It's 
not clear to me how big an issue this is, since {{opts->byteBufferPool}} should 
be a local ref that is automatically deleted when the native method exits.

There are a bunch of warnings of the form {{WARNING in native method: JNI call 
made without checking exceptions when required to from ...}} - after debugging 
these warnings, most of them seem to be caused by the JVM itself (e.g. internal 
JDK code). So they would have to be fixed within the JDK itself.


was (Author: stakiar):
When running the existing tests with {{-Xcheck:jni}} I only see one error: 
{{FATAL ERROR in native method: Invalid global JNI handle passed to 
DeleteGlobalRef}}, which seems to be caused by {{hadoopRzOptionsFree}} calling 
{{DeleteGlobalRef}} on {{opts->byteBufferPool}} which is not a global ref. It's 
not clear to me how big an issue this is, since {{opts->byteBufferPool}} should 
be a local ref that is automatically deleted when the native method exits.

There are a bunch of warnings of the form {{WARNING in native method: JNI call 
made without checking exceptions when required to from ...}} - after debugging 
these warnings, most of them seem to be caused by the JVM itself (e.g. internal 
JDK code). So they would have to be fixed within the JDK itself.

> Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled
> -
>
> Key: HDFS-14321
> URL: https://issues.apache.org/jira/browse/HDFS-14321
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> The JVM exposes an option called {{-Xcheck:jni}} which runs various checks 
> against JNI usage by applications. Further explanation of this JVM option can 
> be found in: 
> [https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts002.html]
>  and 
> [https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/jni_debug.html].
>  When run with this option, the JVM will print out any warnings or errors it 
> encounters with the JNI.
> We should run the libhdfs tests with {{-Xcheck:jni}} (can be added to 
> {{LIBHDFS_OPTS}}) and fix any warnings / errors. We should add this option to 
> our ctest runs as well to ensure no regressions are introduced to libhdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-3246) pRead equivalent for direct read path

2019-03-01 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-3246:
---
Status: Open  (was: Patch Available)

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch, HDFS-3246.005.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.
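>
> A minimal sketch of what such a positioned read could look like (the 
> interface name and method shape here are assumptions, not a committed API):
> {code:java}
> import java.io.IOException;
> import java.nio.ByteBuffer;
>
> // Hypothetical pread-style companion to ByteBufferReadable: read into a
> // ByteBuffer from an explicit file offset without moving the stream's
> // current position, mirroring what PositionedReadable does for byte[].
> public interface ExampleByteBufferPositionedReadable {
>   /**
>    * Reads up to buf.remaining() bytes from the given position into buf.
>    * @return the number of bytes read, or -1 at end of stream
>    */
>   int read(long position, ByteBuffer buf) throws IOException;
> }
> {code}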



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-01 Thread Ekanth Sethuramalingam (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782142#comment-16782142
 ] 

Ekanth Sethuramalingam commented on HDFS-14317:
---

New patch [^HDFS-14317.004.patch] fixes the 
{{hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits}} test failures.

> Standby does not trigger edit log rolling when in-progress edit log tailing 
> is enabled
> --
>
> Key: HDFS-14317
> URL: https://issues.apache.org/jira/browse/HDFS-14317
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Critical
> Attachments: HDFS-14317.001.patch, HDFS-14317.002.patch, 
> HDFS-14317.003.patch, HDFS-14317.004.patch
>
>
> The standby uses the following method to check if it is time to trigger edit 
> log rolling on active.
> {code}
>   /**
>* @return true if the configured log roll period has elapsed.
>*/
>   private boolean tooLongSinceLastLoad() {
> return logRollPeriodMs >= 0 && 
>   (monotonicNow() - lastLoadTimeMs) > logRollPeriodMs ;
>   }
> {code}
> In doTailEdits(), lastLoadTimeMs is updated when standby is able to 
> successfully tail any edits
> {code}
>   if (editsLoaded > 0) {
> lastLoadTimeMs = monotonicNow();
>   }
> {code}
> The default configuration for {{dfs.ha.log-roll.period}} is 120 seconds and 
> {{dfs.ha.tail-edits.period}} is 60 seconds. With in-progress edit log tailing 
> enabled, tooLongSinceLastLoad() will almost never return true resulting in 
> edit logs not rolled for a long time until this configuration 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} takes effect.
> [In our deployment, this resulted in in-progress edit logs getting deleted. 
> The sequence of events is that standby was able to checkpoint twice while the 
> in-progress edit log was growing on active. When the 
> NNStorageRetentionManager decided to cleanup old checkpoints and edit logs, 
> it cleaned up the in-progress edit log from active and QJM (as the txnid on 
> in-progress edit log was older than the 2 most recent checkpoints) resulting 
> in irrecoverably losing a few minutes worth of metadata].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-01 Thread Ekanth Sethuramalingam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekanth Sethuramalingam updated HDFS-14317:
--
Attachment: HDFS-14317.004.patch

> Standby does not trigger edit log rolling when in-progress edit log tailing 
> is enabled
> --
>
> Key: HDFS-14317
> URL: https://issues.apache.org/jira/browse/HDFS-14317
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Critical
> Attachments: HDFS-14317.001.patch, HDFS-14317.002.patch, 
> HDFS-14317.003.patch, HDFS-14317.004.patch
>
>
> The standby uses the following method to check if it is time to trigger edit 
> log rolling on active.
> {code}
>   /**
>* @return true if the configured log roll period has elapsed.
>*/
>   private boolean tooLongSinceLastLoad() {
> return logRollPeriodMs >= 0 && 
>   (monotonicNow() - lastLoadTimeMs) > logRollPeriodMs ;
>   }
> {code}
> In doTailEdits(), lastLoadTimeMs is updated when standby is able to 
> successfully tail any edits
> {code}
>   if (editsLoaded > 0) {
> lastLoadTimeMs = monotonicNow();
>   }
> {code}
> The default configuration for {{dfs.ha.log-roll.period}} is 120 seconds and 
> {{dfs.ha.tail-edits.period}} is 60 seconds. With in-progress edit log tailing 
> enabled, tooLongSinceLastLoad() will almost never return true resulting in 
> edit logs not rolled for a long time until this configuration 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} takes effect.
> [In our deployment, this resulted in in-progress edit logs getting deleted. 
> The sequence of events is that standby was able to checkpoint twice while the 
> in-progress edit log was growing on active. When the 
> NNStorageRetentionManager decided to cleanup old checkpoints and edit logs, 
> it cleaned up the in-progress edit log from active and QJM (as the txnid on 
> in-progress edit log was older than the 2 most recent checkpoints) resulting 
> in irrecoverably losing a few minutes worth of metadata].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-3246) pRead equivalent for direct read path

2019-03-01 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-3246:
---
Attachment: HDFS-3246.005.patch

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch, HDFS-3246.005.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-03-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782131#comment-16782131
 ] 

Wei-Chiu Chuang commented on HDFS-14314:


Sorry, I was at a conference and missed the message. Added [~starphin] to the 
Contributor list.

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Assignee: star
>Priority: Critical
> Fix For: 2.8.4
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314-trunk.005.patch, HDFS-14314.0.patch, 
> HDFS-14314.2.patch, HDFS-14314.patch
>
>
> Since HDFS-7923, to rate-limit DN block reports, a DN asks the active NN for 
> a full block report lease id before sending a full block report, then sends 
> the report together with that lease id. If the lease id is invalid, the NN 
> rejects the full block report and logs "not in the pending set".
> Consider the case where a DN is doing full block reporting while the NN is 
> restarted: the DN will later send a full block report with a lease id 
> acquired from the previous NN instance, which is invalid to the new NN 
> instance. Though the DN recognizes the new NN instance by heartbeat and 
> reregisters itself, it does not reset the lease id from the previous 
> instance.
> The issue may cause DNs to temporarily go dead, making it unsafe to restart 
> the NN, especially in Hadoop clusters with a large number of DNs. HDFS-12914 
> reported the issue without any clue as to why it occurred, and it remained 
> unsolved.
> To make this clear, look at the code below, taken from the method 
> offerService of class BPServiceActor (some code is elided to focus on the 
> issue). fullBlockReportLeaseId is a local variable holding the lease id from 
> the NN. While the NN is restarting, the blockReport call throws an exception 
> that is caught by the catch block in the while loop, so 
> fullBlockReportLeaseId is never set back to 0. After the NN has restarted, 
> the DN sends a full block report that the new NN instance rejects, and it 
> will not send another one until the next scheduled full block report, about 
> an hour later.
> The solution is simple: reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so that the DN asks the new NN 
> instance for a valid fullBlockReportLeaseId.
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
> try {
>   final long startTime = scheduler.monotonicNow();
>   //
>   // Every so often, send heartbeat or block-report
>   //
>   final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>   HeartbeatResponse resp = null;
>   if (sendHeartbeat) {
>   
> boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
> scheduler.isBlockReportDue(startTime);
> scheduler.scheduleNextHeartbeat();
> if (!dn.areHeartbeatsDisabledForTests()) {
>   resp = sendHeartBeat(requestBlockReportLease);
>   assert resp != null;
>   if (resp.getFullBlockReportLeaseId() != 0) {
> if (fullBlockReportLeaseId != 0) {
>   LOG.warn(nnAddr + " sent back a full block report lease " +
>   "ID of 0x" +
>   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>   ", but we already have a lease ID of 0x" +
>   Long.toHexString(fullBlockReportLeaseId) + ". " +
>   "Overwriting old lease ID.");
> }
> fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>   }
>  
> }
>   }
>
>  
>   if ((fullBlockReportLeaseId != 0) || forceFullBr) {
> // The blockReport call throws here while the NN is restarting:
> cmds = blockReport(fullBlockReportLeaseId);
> fullBlockReportLeaseId = 0;
>   }
>   
> } catch (RemoteException re) {
>   // ... exception handling; fullBlockReportLeaseId is NOT reset here ...
> }
>   } // while (shouldRun())
> } // offerService{code}
>  
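
A minimal sketch of the proposed reset, assuming the loop structure quoted 
above (illustrative only; the attached patches are authoritative):

{code:java}
// Illustrative sketch, not the exact patch: reset the lease id in the loop's
// exception handler, so the next heartbeat requests a fresh lease from the
// (possibly new) NN instance instead of reusing a stale one.
while (shouldRun()) {
  try {
    // ... heartbeat and block report logic as above ...
  } catch (IOException e) {
    fullBlockReportLeaseId = 0;  // lease from the previous NN is now invalid
    // ... existing error handling / sleep / re-registration ...
  }
}
{code}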



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-3246) pRead equivalent for direct read path

2019-03-01 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HDFS-3246:
---
Status: Patch Available  (was: Open)

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch, HDFS-3246.005.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-03-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HDFS-14314:
--

Assignee: star

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Assignee: star
>Priority: Critical
> Fix For: 2.8.4
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314-trunk.003.patch, 
> HDFS-14314-trunk.004.patch, HDFS-14314-trunk.005.patch, HDFS-14314.0.patch, 
> HDFS-14314.2.patch, HDFS-14314.patch
>
>
> Since HDFS-7923, to rate-limit DN block reports, a DN asks the active NN for 
> a full block report lease id before sending a full block report, then sends 
> the report together with that lease id. If the lease id is invalid, the NN 
> rejects the full block report and logs "not in the pending set".
> Consider the case where a DN is doing full block reporting while the NN is 
> restarted: the DN will later send a full block report with a lease id 
> acquired from the previous NN instance, which is invalid to the new NN 
> instance. Though the DN recognizes the new NN instance by heartbeat and 
> reregisters itself, it does not reset the lease id from the previous 
> instance.
> The issue may cause DNs to temporarily go dead, making it unsafe to restart 
> the NN, especially in Hadoop clusters with a large number of DNs. HDFS-12914 
> reported the issue without any clue as to why it occurred, and it remained 
> unsolved.
> To make this clear, look at the code below, taken from the method 
> offerService of class BPServiceActor (some code is elided to focus on the 
> issue). fullBlockReportLeaseId is a local variable holding the lease id from 
> the NN. While the NN is restarting, the blockReport call throws an exception 
> that is caught by the catch block in the while loop, so 
> fullBlockReportLeaseId is never set back to 0. After the NN has restarted, 
> the DN sends a full block report that the new NN instance rejects, and it 
> will not send another one until the next scheduled full block report, about 
> an hour later.
> The solution is simple: reset fullBlockReportLeaseId to 0 after any 
> exception or after registering to the NN, so that the DN asks the new NN 
> instance for a valid fullBlockReportLeaseId.
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
> try {
>   final long startTime = scheduler.monotonicNow();
>   //
>   // Every so often, send heartbeat or block-report
>   //
>   final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>   HeartbeatResponse resp = null;
>   if (sendHeartbeat) {
>   
> boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
> scheduler.isBlockReportDue(startTime);
> scheduler.scheduleNextHeartbeat();
> if (!dn.areHeartbeatsDisabledForTests()) {
>   resp = sendHeartBeat(requestBlockReportLease);
>   assert resp != null;
>   if (resp.getFullBlockReportLeaseId() != 0) {
> if (fullBlockReportLeaseId != 0) {
>   LOG.warn(nnAddr + " sent back a full block report lease " +
>   "ID of 0x" +
>   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>   ", but we already have a lease ID of 0x" +
>   Long.toHexString(fullBlockReportLeaseId) + ". " +
>   "Overwriting old lease ID.");
> }
> fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>   }
>  
> }
>   }
>
>  
>   if ((fullBlockReportLeaseId != 0) || forceFullBr) {
> // The blockReport call throws here while the NN is restarting:
> cmds = blockReport(fullBlockReportLeaseId);
> fullBlockReportLeaseId = 0;
>   }
>   
> } catch (RemoteException re) {
>   // ... exception handling; fullBlockReportLeaseId is NOT reset here ...
> }
>   } // while (shouldRun())
> } // offerService{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14323) Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters in hdfs file path

2019-03-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782121#comment-16782121
 ] 

Wei-Chiu Chuang commented on HDFS-14323:


This doesn't reproduce for me, although I use CDH6 (Hadoop 3.0) + CDH5 (Hadoop 
2.6) in my setup. It sounds like a problem on the remote cluster side (Hadoop 
2), not on the local cluster side (Hadoop 3), so I suspect it's a bug that has 
since been fixed.

 

Are you able to access the file directly from the Hadoop 3 cluster? e.g. hdfs 
dfs -ls webhdfs://c2265-node2.hwx.com:50070/tmp/date=1234557

> Distcp fails in Hadoop 3.x when 2.x source webhdfs url has special characters 
> in hdfs file path
> ---
>
> Key: HDFS-14323
> URL: https://issues.apache.org/jira/browse/HDFS-14323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Srinivasu Majeti
>Priority: Major
> Attachments: HDFS-14323v0.patch
>
>
> There was an enhancement to allow semicolons in source/target URLs for the 
> distcp use case as part of HDFS-13176, and a backward compatibility fix as 
> part of HDFS-13582. Still, there seems to be an issue when triggering distcp 
> from a 3.x cluster to pull webhdfs data from a 2.x hadoop cluster. We might 
> need to adjust the existing fix as described below, by checking whether the 
> url is already encoded. That fixes it. 
> diff --git 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
>  
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> index 5936603c34a..dc790286aff 100644
> --- 
> a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> +++ 
> b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
> @@ -609,7 +609,10 @@ URL toUrl(final HttpOpParam.Op op, final Path fspath,
>  boolean pathAlreadyEncoded = false;
>  try {
>  fspathUriDecoded = URLDecoder.decode(fspathUri.getPath(), "UTF-8");
> - pathAlreadyEncoded = true;
> + if(!fspathUri.getPath().equals(fspathUriDecoded))
> + {
> + pathAlreadyEncoded = true;
> + }
>  } catch (IllegalArgumentException ex) {
>  LOG.trace("Cannot decode URL encoded file", ex);
>  }
>  
>  
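
To illustrate why the decode-and-compare check in the diff above works, here 
is a standalone demo (not project code):

{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class EncodedPathCheck {
  public static void main(String[] args) throws UnsupportedEncodingException {
    String plain   = "/tmp/date=1234557";    // never URL-encoded
    String encoded = "/tmp/date%3D1234557";  // pre-encoded by the client

    // decode() succeeds on both inputs, so "decode did not throw" alone
    // proves nothing; only a difference between input and output shows
    // that the path was actually encoded.
    System.out.println(!plain.equals(URLDecoder.decode(plain, "UTF-8")));     // false
    System.out.println(!encoded.equals(URLDecoder.decode(encoded, "UTF-8"))); // true
  }
}
{code}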



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1093) Configuration tab in OM/SCM ui is not displaying the correct values

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1093?focusedWorklogId=206636=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-206636
 ]

ASF GitHub Bot logged work on HDDS-1093:


Author: ASF GitHub Bot
Created on: 01/Mar/19 21:44
Start Date: 01/Mar/19 21:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #527: HDDS-1093. 
Configuration tab in OM/SCM ui is not displaying the correct values
URL: https://github.com/apache/hadoop/pull/527#issuecomment-468822465
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 54 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1212 | trunk passed |
   | +1 | compile | 89 | trunk passed |
   | +1 | checkstyle | 35 | trunk passed |
   | +1 | mvnsite | 75 | trunk passed |
   | +1 | shadedclient | 802 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 100 | trunk passed |
   | +1 | javadoc | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 66 | the patch passed |
   | -1 | jshint | 83 | The patch generated 294 new + 1942 unchanged - 1053 
fixed = 2236 total (was 2995) |
   | +1 | compile | 69 | the patch passed |
   | +1 | javac | 69 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | +1 | mvnsite | 54 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 759 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 110 | the patch passed |
   | +1 | javadoc | 53 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 87 | common in the patch failed. |
   | +1 | unit | 31 | framework in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3867 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/527 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  jshint  |
   | uname | Linux 7544400e110b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8b72aea |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/artifact/out/diff-patch-jshint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/framework U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-527/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 206636)
Time Spent: 2.5h  (was: 2h 20m)

> Configuration tab in OM/SCM ui is not displaying the correct values
> ---
>
> Key: HDDS-1093
> URL: https://issues.apache.org/jira/browse/HDDS-1093
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager, SCM
>Reporter: Sandeep Nemuri
>Assignee: Vivek Ratnavel Subramanian
>Priority: Critical
>  Labels: pull-request-available
> Attachments: image-2019-02-12-19-47-18-332.png
>
>  

[jira] [Commented] (HDFS-3246) pRead equivalent for direct read path

2019-03-01 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782101#comment-16782101
 ] 

Sahil Takiar commented on HDFS-3246:


[~anoop.hbase] {quote} When calling this API with buf remaining size of n and 
the file is having data size > n after given position, is it guaranteed to read 
the whole n bytes into BB in one go? Just wanted to confirm. Thanks. {quote}

Unfortunately, the existing APIs aren't clear on this behavior. The 
{{ByteBufferPositionedReadable}} interface is meant to follow the same 
semantics as {{PositionedReadable}} and {{ByteBufferReadable}}. 
{{PositionedReadable}} says it "Read[s] up to the specified number of bytes" 
and {{ByteBufferReadable}} says it "Reads up to buf.remaining() bytes". In 
practice, it looks like pread in {{DFSInputStream}} follows the behavior you 
have described, e.g. it either reads until {{ByteBuffer#hasRemaining()}} 
returns false, or there are no more bytes in the file. 
{{ByteBufferPositionedReadable}} should follow the same behavior for 
{{DFSInputStream}}.

[~jojochuang] thanks for the review comments. I've got this implemented for 
{{CryptoInputStream}} as well and will post a patch soon.

As far as testing goes, I've tested the libhdfs path via Impala on a real 
cluster and everything seemed to be working as expected (I have not tested 
against an encrypted HDFS cluster).

Will post an updated patch shortly.
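
As a usage sketch of the "fill the buffer or hit EOF" behavior described above 
(hedged: the interface is the one proposed in these patches, and the helper 
name here is made up):

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

final class PReadUtil {
  // Loops because a single read(position, buf) call is only guaranteed to
  // return *up to* buf.remaining() bytes; a caller wanting the whole range
  // must re-issue the read at the advanced position until full or EOF.
  static void readFully(ByteBufferPositionedReadable in, long position,
      ByteBuffer buf) throws IOException {
    while (buf.hasRemaining()) {
      int n = in.read(position, buf);
      if (n < 0) {
        throw new EOFException("EOF before the buffer was filled");
      }
      position += n;
    }
  }
}
{code}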

> pRead equivalent for direct read path
> -
>
> Key: HDFS-3246
> URL: https://issues.apache.org/jira/browse/HDFS-3246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, performance
>Affects Versions: 3.0.0-alpha1
>Reporter: Henry Robinson
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-3246.001.patch, HDFS-3246.002.patch, 
> HDFS-3246.003.patch, HDFS-3246.004.patch
>
>
> There is no pread equivalent in ByteBufferReadable. We should consider adding 
> one. It would be relatively easy to implement for the distributed case 
> (certainly compared to HDFS-2834), since DFSInputStream does most of the 
> heavy lifting.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14328) [Clean-up] Remove NULL check before instanceof in TestGSet

2019-03-01 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782088#comment-16782088
 ] 

Shweta commented on HDFS-14328:
---

The failed test testGracefulFailoverMultipleZKfcs() in TestZKFailoverController 
passes locally on my machine.

> [Clean-up] Remove NULL check before instanceof in TestGSet
> --
>
> Key: HDFS-14328
> URL: https://issues.apache.org/jira/browse/HDFS-14328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDFS-14328.001.patch
>
>
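
For reference, the clean-up in the title relies on a guarantee of the language 
itself: {{instanceof}} already evaluates to false for null, so a preceding 
null check is redundant (type and variable names below are illustrative):

{code:java}
// Before: redundant null guard
if (obj != null && obj instanceof LinkedElement) { ... }

// After: instanceof alone is safe, because (null instanceof T) is false
if (obj instanceof LinkedElement) { ... }
{code}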




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14272) [SBN read] ObserverReadProxyProvider should sync with active txnID on startup

2019-03-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782078#comment-16782078
 ] 

Hudson commented on HDFS-14272:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16107 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16107/])
HDFS-14272. [SBN read] Make ObserverReadProxyProvider initialize its (xkrogen: 
rev 5a15f7b3f47e61905bf41b40cf5243ab96bd3448)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConsistentReadsObserver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java


> [SBN read] ObserverReadProxyProvider should sync with active txnID on startup
> -
>
> Key: HDFS-14272
> URL: https://issues.apache.org/jira/browse/HDFS-14272
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
> Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
> SSL + Kerberos + RPC encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14272.000.patch, HDFS-14272.001.patch, 
> HDFS-14272.002.patch
>
>
> It is typical for integration tests to create some files and then check their 
> existence. For example, like the following simple bash script:
> {code:java}
> # hdfs dfs -touchz /tmp/abc
> # hdfs dfs -ls /tmp/abc
> {code}
> The test executes the HDFS bash commands sequentially, but it may fail with 
> Consistent Standby Reads because the -ls does not find the file.
> Analysis: the second bash command, while launched sequentially after the 
> first one, is not aware of the state id returned from the first bash command, 
> so the ObserverNode wouldn't wait for the edits to get propagated, and the 
> command thus fails.
> I've got a cluster where the Observer has tens of seconds of RPC latency, and 
> this becomes very annoying. (I am still trying to figure out why this 
> Observer has such a long RPC latency. But that's another story.)
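
Conceptually, the committed change makes the proxy provider synchronize with 
the active before serving reads from an observer. A rough sketch of the idea 
(field names assumed; see the committed patch for the real logic):

{code:java}
// Conceptual sketch only: before the first call is routed to an observer,
// msync() against the active so the client's state id is at least the
// active's latest committed transaction id. A brand-new client (like the
// second bash command above) then cannot read stale state.
private volatile boolean initialSyncDone = false;

private void ensureInitialSync(ClientProtocol activeProxy) throws IOException {
  if (!initialSyncDone) {
    activeProxy.msync();  // ClientProtocol#msync waits on the active's txn id
    initialSyncDone = true;
  }
}
{code}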



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14259) RBF: Fix safemode message for Router

2019-03-01 Thread Ranith Sardar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranith Sardar updated HDFS-14259:
-
Attachment: HDFS-14259-HDFS-13891.002.patch

> RBF: Fix safemode message for Router
> 
>
> Key: HDFS-14259
> URL: https://issues.apache.org/jira/browse/HDFS-14259
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14259-HDFS-13891.000.patch, 
> HDFS-14259-HDFS-13891.001.patch, HDFS-14259-HDFS-13891.002.patch
>
>
> Currently, the {{getSafemode()}} bean checks the state of the Router but 
> returns the error if the status is different than SAFEMODE:
> {code}
>   public String getSafemode() {
>     try {
>       if (!getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
>         return "Safe mode is ON. " + this.getSafeModeTip();
>       }
>     } catch (IOException e) {
>       return "Failed to get safemode status. Please check router"
>           + "log for more detail.";
>     }
>     return "";
>   }
> {code}
> The condition should be reversed.
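
For clarity, a sketch of the method with the condition reversed (illustrative; 
the attached patches are authoritative, and the missing space before "log" in 
the error string is also fixed here):

{code:java}
// Sketch: report safe mode only when the Router actually IS in SAFEMODE.
public String getSafemode() {
  try {
    if (getRouter().isRouterState(RouterServiceState.SAFEMODE)) {
      return "Safe mode is ON. " + this.getSafeModeTip();
    }
  } catch (IOException e) {
    return "Failed to get safemode status. Please check router "
        + "log for more detail.";
  }
  return "";
}
{code}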



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1136) Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-01 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782063#comment-16782063
 ] 

Aravindan Vijayan edited comment on HDDS-1136 at 3/1/19 9:04 PM:
-

Added metric gauges for tracking DB checkpointing statistics. The OMMetrics 
class will hold these gauge values at any instant. We don't store historical 
values for these metrics. They can be pulled from the OM by a sink (file sink, 
Prometheus sink, Recon, etc.).

*Testing done*
Added an integration test for the Servlet method that gets the OM DB checkpoint.
Manually verified the patch on a single-node Ozone cluster.
The Github PR run came back clean.


was (Author: avijayan):
Added metric gauges for tracking DB checkpointing statistics. The OMMetrics 
class will hold these gauge values at any instant. We don't store historical 
values for these metrics. They can be pulled from the OM by a sink (file sink, 
Prometheus sink, Recon, etc.).

*Testing done*
Added an integration test for the Servlet method that gets the OM DB checkpoint.
Manually verified the patch on a single-node Ozone cluster.

> Add metric counters to capture the RocksDB checkpointing statistics.
> 
>
> Key: HDDS-1136
> URL: https://issues.apache.org/jira/browse/HDDS-1136
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1136-000.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> As per the discussion with [~anu] on HDDS-1085, this JIRA tracks the effort 
> to add metric counters to capture RocksDB checkpointing performance. 
> From [~anu]'s comments, it might be interesting to have 3 counters, or a map 
> of counters:
> * How much time are we taking for each checkpoint
> * How much time are we taking for each tar operation, along with sizes
> * How much time are we taking for the transfer.
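
A rough sketch of what such gauges could look like with the Hadoop metrics2 
API (class and field names here are assumptions, not necessarily what the 
attached patch uses):

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

// Sketch: last-observed checkpoint timings exposed as gauges, so any sink
// (file, Prometheus, Recon) can pull the current values at scrape time.
@Metrics(about = "OM DB checkpoint metrics", context = "dfs")
public class CheckpointMetricsSketch {
  @Metric private MutableGaugeLong lastCheckpointCreationTimeTaken;
  @Metric private MutableGaugeLong lastCheckpointTarTimeTaken;
  @Metric private MutableGaugeLong lastCheckpointTransferTimeTaken;

  public static CheckpointMetricsSketch create() {
    return DefaultMetricsSystem.instance().register(
        "CheckpointMetricsSketch", "OM DB checkpoint metrics",
        new CheckpointMetricsSketch());
  }

  public void setLastCheckpointCreationTimeTaken(long millis) {
    lastCheckpointCreationTimeTaken.set(millis);
  }
}
{code}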



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782065#comment-16782065
 ] 

Hadoop QA commented on HDFS-13972:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
37s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
44s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13972 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960819/HDFS-13972-HDFS-13891.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7438ee7bd214 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / da99828 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26375/testReport/ |
| Max. process+thread count | 996 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26375/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was 

[jira] [Updated] (HDFS-14272) [SBN read] ObserverReadProxyProvider should sync with active txnID on startup

2019-03-01 Thread Erik Krogen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-14272:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> [SBN read] ObserverReadProxyProvider should sync with active txnID on startup
> -
>
> Key: HDFS-14272
> URL: https://issues.apache.org/jira/browse/HDFS-14272
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
> Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
> SSL + Kerberos + RPC encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14272.000.patch, HDFS-14272.001.patch, 
> HDFS-14272.002.patch
>
>
> It is typical for integration tests to create some files and then check their 
> existence. For example, like the following simple bash script:
> {code:java}
> # hdfs dfs -touchz /tmp/abc
> # hdfs dfs -ls /tmp/abc
> {code}
> The test executes the HDFS bash commands sequentially, but it may fail with 
> Consistent Standby Reads because the -ls does not find the file.
> Analysis: the second bash command, while launched sequentially after the 
> first one, is not aware of the state id returned from the first bash command, 
> so the ObserverNode wouldn't wait for the edits to get propagated, and the 
> command thus fails.
> I've got a cluster where the Observer has tens of seconds of RPC latency, and 
> this becomes very annoying. (I am still trying to figure out why this 
> Observer has such a long RPC latency. But that's another story.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14272) [SBN read] ObserverReadProxyProvider should sync with active txnID on startup

2019-03-01 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782064#comment-16782064
 ] 

Erik Krogen commented on HDFS-14272:


Based on the +1 from [~shv] and the earlier review from [~csun], I just 
committed this to trunk. Thanks all!

> [SBN read] ObserverReadProxyProvider should sync with active txnID on startup
> -
>
> Key: HDFS-14272
> URL: https://issues.apache.org/jira/browse/HDFS-14272
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
> Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
> SSL + Kerberos + RPC encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14272.000.patch, HDFS-14272.001.patch, 
> HDFS-14272.002.patch
>
>
> It is typical for integration tests to create some files and then check their 
> existence. For example, like the following simple bash script:
> {code:java}
> # hdfs dfs -touchz /tmp/abc
> # hdfs dfs -ls /tmp/abc
> {code}
> The test executes the HDFS bash commands sequentially, but it may fail with 
> Consistent Standby Reads because the -ls does not find the file.
> Analysis: the second bash command, while launched sequentially after the 
> first one, is not aware of the state id returned from the first bash command, 
> so the ObserverNode wouldn't wait for the edits to get propagated, and the 
> command thus fails.
> I've got a cluster where the Observer has tens of seconds of RPC latency, and 
> this becomes very annoying. (I am still trying to figure out why this 
> Observer has such a long RPC latency. But that's another story.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1136) Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-01 Thread Aravindan Vijayan (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782063#comment-16782063
 ] 

Aravindan Vijayan commented on HDDS-1136:
-

Added metric gauges for tracking DB checkpointing statistics. The OMMetrics 
class will hold these gauge values at any instant. We don't store historical 
values for these metrics. They can be pulled from the OM by a sink (file sink, 
Prometheus sink, Recon, etc.).

*Testing done*
Added an integration test for the Servlet method that gets the OM DB checkpoint.
Manually verified the patch on a single-node Ozone cluster.

> Add metric counters to capture the RocksDB checkpointing statistics.
> 
>
> Key: HDDS-1136
> URL: https://issues.apache.org/jira/browse/HDDS-1136
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1136-000.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> As per the discussion with [~anu] on HDDS-1085, this JIRA tracks the effort 
> to add metric counters to capture RocksDB checkpointing performance. 
> From [~anu]'s comments, it might be interesting to have 3 counters, or a map 
> of counters:
> * How much time are we taking for each checkpoint
> * How much time are we taking for each tar operation, along with sizes
> * How much time are we taking for the transfer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1136) Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-01 Thread Aravindan Vijayan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-1136:

Attachment: HDDS-1136-000.patch

> Add metric counters to capture the RocksDB checkpointing statistics.
> 
>
> Key: HDDS-1136
> URL: https://issues.apache.org/jira/browse/HDDS-1136
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1136-000.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> As per the discussion with [~anu] on HDDS-1085, this JIRA tracks the effort 
> to add metric counters to capture RocksDB checkpointing performance. 
> From [~anu]'s comments, it might be interesting to have 3 counters, or a map 
> of counters:
> * How much time are we taking for each checkpoint
> * How much time are we taking for each tar operation, along with sizes
> * How much time are we taking for the transfer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14328) [Clean-up] Remove NULL check before instanceof in TestGSet

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782055#comment-16782055
 ] 

Hadoop QA commented on HDFS-14328:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 37 unchanged - 1 fixed = 37 total (was 38) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 40s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14328 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960811/HDFS-14328.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c2cbab2a020f 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cab8529 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26374/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26374/testReport/ |
| Max. process+thread count | 1581 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Commented] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782052#comment-16782052
 ] 

Hadoop QA commented on HDDS-1208:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-hdds/container-service generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/container-service |
|  |  Dead store to dataContainerCommandProto in 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.startTransaction(RaftClientRequest)
  At 
ContainerStateMachine.java:org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.startTransaction(RaftClientRequest)
  At ContainerStateMachine.java:[line 255] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1208 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960818/HDDS-1208.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 174c1873142e 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / cab8529 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Work started] (HDDS-807) Period should be an invalid character in bucket names

2019-03-01 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-807 started by Siddharth Wagle.

> Period should be an invalid character in bucket names
> -
>
> Key: HDDS-807
> URL: https://issues.apache.org/jira/browse/HDDS-807
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Critical
>  Labels: newbie
>
> ozonefs paths use the following syntax:
> - o3fs://bucket.volume/..
> The OM host and port are read from configuration. We cannot specify a target 
> filesystem with a fully qualified path, e.g. 
> _o3fs://bucket.volume.om-host.example.com:9862/_. Hence we cannot hand a 
> fully qualified URL with the OM hostname to a client without setting up 
> config files beforehand. This is inconvenient. It also means there is no way 
> to perform a distcp from one Ozone cluster to another.
> We need a way to support fully qualified paths with the OM hostname and 
> port, e.g. _bucket.volume.om-host.example.com_. If we allow periods in 
> bucket names, then such fully qualified paths cannot be parsed 
> unambiguously. However, if we disallow periods, then we can support all of 
> the following paths unambiguously:
>  # *o3fs://bucket.volume/key* - The authority has only two period-separated 
> components. These must be bucket and volume name respectively.
>  # *o3fs://bucket.volume.om-host.example.com/key* - The authority has more 
> than two components. The first two must be bucket and volume, the rest must 
> be the hostname.
>  # *o3fs://bucket.volume.om-host.example.com:5678/key* - Similar to #2, 
> except with a port number.
>  
> An open question is around HA support. I believe for HA we will have to 
> introduce the notion of a _nameservice_, similar to the HDFS nameservice. 
> This will allow a fourth kind of Ozone URL:
>  - *o3fs://bucket.volume.ns1/key* - How do we distinguish this from #3 above? 
> One way could be to check whether _ns1_ is known as an Ozone nameservice via 
> configuration. If so, treat it as the name of an HA service; else treat it 
> as a hostname.
>  
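
Under the proposed rule (no periods in bucket names), authority parsing 
becomes unambiguous. A small sketch of the split (hypothetical helper, 
assuming the rules above; HA nameservice resolution would happen before 
treating the remainder as a hostname):

{code:java}
// Sketch: split an o3fs authority under the "no periods in bucket names"
// rule. The first two dot-separated components are bucket and volume; any
// remainder is the OM host (with optional port), or absent (use config).
static String[] parseAuthority(String authority) {
  String[] parts = authority.split("\\.", 3);
  String bucket = parts[0];
  String volume = parts[1];
  String omHostAndPort = (parts.length == 3) ? parts[2] : null;
  return new String[] { bucket, volume, omHostAndPort };
}
{code}

For example, "bucket.volume.om-host.example.com:5678" splits into bucket, 
volume, and "om-host.example.com:5678", while "bucket.volume" leaves the OM 
host to be read from configuration.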



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-01 Thread Ekanth Sethuramalingam (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782040#comment-16782040
 ] 

Ekanth Sethuramalingam commented on HDFS-14317:
---

Thanks [~xkrogen] for the review. New patch [^HDFS-14317.003.patch] after 
addressing all checkstyle issues.

> Standby does not trigger edit log rolling when in-progress edit log tailing 
> is enabled
> --
>
> Key: HDFS-14317
> URL: https://issues.apache.org/jira/browse/HDFS-14317
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Critical
> Attachments: HDFS-14317.001.patch, HDFS-14317.002.patch, 
> HDFS-14317.003.patch
>
>
> The standby uses the following method to check if it is time to trigger edit 
> log rolling on active.
> {code}
>   /**
>* @return true if the configured log roll period has elapsed.
>*/
>   private boolean tooLongSinceLastLoad() {
> return logRollPeriodMs >= 0 && 
>   (monotonicNow() - lastLoadTimeMs) > logRollPeriodMs ;
>   }
> {code}
> In doTailEdits(), lastLoadTimeMs is updated when standby is able to 
> successfully tail any edits
> {code}
>   if (editsLoaded > 0) {
> lastLoadTimeMs = monotonicNow();
>   }
> {code}
> The default configuration for {{dfs.ha.log-roll.period}} is 120 seconds and 
> {{dfs.ha.tail-edits.period}} is 60 seconds. With in-progress edit log tailing 
> enabled, tooLongSinceLastLoad() will almost never return true, resulting in 
> edit logs not being rolled for a long time, until 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} takes effect.
> [In our deployment, this resulted in in-progress edit logs getting deleted. 
> The sequence of events is that the standby was able to checkpoint twice while 
> the in-progress edit log was growing on the active. When the 
> NNStorageRetentionManager decided to clean up old checkpoints and edit logs, 
> it cleaned up the in-progress edit log from the active and QJM (as the txnid 
> on the in-progress edit log was older than the 2 most recent checkpoints), 
> resulting in irrecoverably losing a few minutes' worth of metadata.]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14317) Standby does not trigger edit log rolling when in-progress edit log tailing is enabled

2019-03-01 Thread Ekanth Sethuramalingam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekanth Sethuramalingam updated HDFS-14317:
--
Attachment: HDFS-14317.003.patch

> Standby does not trigger edit log rolling when in-progress edit log tailing 
> is enabled
> --
>
> Key: HDFS-14317
> URL: https://issues.apache.org/jira/browse/HDFS-14317
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Ekanth Sethuramalingam
>Assignee: Ekanth Sethuramalingam
>Priority: Critical
> Attachments: HDFS-14317.001.patch, HDFS-14317.002.patch, 
> HDFS-14317.003.patch
>
>
> The standby uses the following method to check if it is time to trigger edit 
> log rolling on active.
> {code}
>   /**
>* @return true if the configured log roll period has elapsed.
>*/
>   private boolean tooLongSinceLastLoad() {
> return logRollPeriodMs >= 0 && 
>   (monotonicNow() - lastLoadTimeMs) > logRollPeriodMs ;
>   }
> {code}
> In doTailEdits(), lastLoadTimeMs is updated when standby is able to 
> successfully tail any edits
> {code}
>   if (editsLoaded > 0) {
> lastLoadTimeMs = monotonicNow();
>   }
> {code}
> The default configuration for {{dfs.ha.log-roll.period}} is 120 seconds and 
> {{dfs.ha.tail-edits.period}} is 60 seconds. With in-progress edit log tailing 
> enabled, tooLongSinceLastLoad() will almost never return true, resulting in 
> edit logs not being rolled for a long time, until 
> {{dfs.namenode.edit.log.autoroll.multiplier.threshold}} takes effect.
> [In our deployment, this resulted in in-progress edit logs getting deleted. 
> The sequence of events is that the standby was able to checkpoint twice while 
> the in-progress edit log was growing on the active. When the 
> NNStorageRetentionManager decided to clean up old checkpoints and edit logs, 
> it cleaned up the in-progress edit log from the active and QJM (as the txnid 
> on the in-progress edit log was older than the 2 most recent checkpoints), 
> resulting in irrecoverably losing a few minutes' worth of metadata.]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1204) Fix ClassNotFound issue with javax.xml.bind.DatatypeConverter used by DefaultProfile

2019-03-01 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1204:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~xyao], thanks for fixing this.

> Fix ClassNotFound issue with javax.xml.bind.DatatypeConverter used by 
> DefaultProfile
> 
>
> Key: HDDS-1204
> URL: https://issues.apache.org/jira/browse/HDDS-1204
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1204.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The ozonesecure docker-compose has been changed to use a hadoop-runner image
> based on Java 11. Several packages/classes that were present in Java 8 have
> since been removed, such as
> javax.xml.bind.DatatypeConverter.parseHexBinary.
> This ticket is opened to fix issues running ozonesecure docker-compose on
> Java 11.
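
For context, javax.xml.bind is no longer on the default classpath in Java 11,
so callers of DatatypeConverter must either add the JAXB API as an explicit
dependency or parse hex themselves. A minimal JDK-only replacement could look
like the sketch below (illustrative; not necessarily the approach taken in the
patch):

{code:java}
/** Parses a hex string such as "0afb" into bytes using only the JDK. */
public static byte[] parseHexBinary(String s) {
  if (s.length() % 2 != 0) {
    throw new IllegalArgumentException("hex string must have even length");
  }
  byte[] out = new byte[s.length() / 2];
  for (int i = 0; i < s.length(); i += 2) {
    int hi = Character.digit(s.charAt(i), 16);
    int lo = Character.digit(s.charAt(i + 1), 16);
    if (hi < 0 || lo < 0) {
      throw new IllegalArgumentException("invalid hex character");
    }
    out[i / 2] = (byte) ((hi << 4) | lo);
  }
  return out;
}
{code}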



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1173) Fix a data corruption bug in BlockOutputStream

2019-03-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782031#comment-16782031
 ] 

Hadoop QA commented on HDDS-1173:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 16m 
58s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m 
16s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 18m 16s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 57s{color} | {color:orange} root: The patch generated 4 new + 0 unchanged - 
0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdds/client generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 32s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 36s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/client |
|  |  Dead store to pos in 

[jira] [Work logged] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1211?focusedWorklogId=206617=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-206617
 ]

ASF GitHub Bot logged work on HDDS-1211:


Author: ASF GitHub Bot
Created on: 01/Mar/19 20:13
Start Date: 01/Mar/19 20:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #543: HDDS-1211. Test 
SCMChillMode failing randomly in Jenkins run
URL: https://github.com/apache/hadoop/pull/543#issuecomment-468796851
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1000 | trunk passed |
   | -1 | compile | 52 | integration-test in trunk failed. |
   | +1 | checkstyle | 21 | trunk passed |
   | -1 | mvnsite | 26 | integration-test in trunk failed. |
   | +1 | shadedclient | 666 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | trunk passed |
   | +1 | javadoc | 16 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 21 | integration-test in the patch failed. |
   | -1 | compile | 21 | integration-test in the patch failed. |
   | -1 | javac | 21 | integration-test in the patch failed. |
   | -0 | checkstyle | 15 | hadoop-ozone/integration-test: The patch generated 
2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
   | -1 | mvnsite | 20 | integration-test in the patch failed. |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedclient | 714 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | the patch passed |
   | +1 | javadoc | 14 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 25 | integration-test in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 2747 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/543 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 64ea92566487 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / cab8529 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/branch-compile-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/branch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-mvninstall-hadoop-ozone_integration-test.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-compile-hadoop-ozone_integration-test.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-compile-hadoop-ozone_integration-test.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/diff-checkstyle-hadoop-ozone_integration-test.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-mvnsite-hadoop-ozone_integration-test.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-543/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the 

[jira] [Commented] (HDDS-1193) Refactor ContainerChillModeRule and DatanodeChillMode rule

2019-03-01 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782025#comment-16782025
 ] 

Ajay Kumar commented on HDDS-1193:
--

[~bharatviswa] thanks for the patch, added a few comments on the PR.

> Refactor ContainerChillModeRule and DatanodeChillMode rule
> --
>
> Key: HDDS-1193
> URL: https://issues.apache.org/jira/browse/HDDS-1193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The main intention of this Jira is to have all rules handle events in a
> similar way.
> In this Jira, the following changes were made:
>  # Both DatanodeRule and ContainerRule implement EventHandler and listen for
> NodeRegistrationContainerReport
>  # Update ScmChillModeManager not to handle any events (as each rule needs to
> handle an event and act on that rule)
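
For illustration, the shape described above would look roughly like the sketch
below, using the HDDS event queue's EventHandler interface (the class body is
assumed; this is not the actual patch):

{code:java}
// Hypothetical sketch: each chill mode rule handles its own event.
public class ContainerChillModeRule
    implements EventHandler<NodeRegistrationContainerReport> {

  @Override
  public void onMessage(NodeRegistrationContainerReport report,
      EventPublisher publisher) {
    // Update this rule's progress from the report; if the rule is now
    // satisfied, notify the chill mode manager so it can re-evaluate.
  }
}
{code}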



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run

2019-03-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1211:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-1127

> Test SCMChillMode failing randomly in Jenkins run
> -
>
> Key: HDDS-1211
> URL: https://issues.apache.org/jira/browse/HDDS-1211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>  at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389) at 
> org.apache.hadoop.ozone.om.TestScmChillMode.testSCMChillMode(TestScmChillMode.java:286)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-01 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1208:
--
Attachment: HDDS-1208.001.patch

> ContainerStateMachine should set chunk data as state machine data for ratis
> ---
>
> Key: HDDS-1208
> URL: https://issues.apache.org/jira/browse/HDDS-1208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1208.001.patch
>
>
> Currently ContainerStateMachine sets ContainerCommandRequestProto as the
> state machine data. This requires converting the ContainerCommandRequestProto
> to a bytestring, which leads to a redundant buffer copy in the case of a
> write chunk request. This can be avoided by setting the chunk data as the
> state machine data for a log entry in ratis.
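
To make the copy concrete: serializing the whole request proto copies the
embedded chunk payload a second time, whereas the chunk's ByteString can be
handed to ratis directly. A rough illustration (accessor names are assumed
from the description, not taken from the patch):

{code:java}
// Serializing the entire request copies the chunk bytes again.
ByteString stateMachineData = requestProto.toByteString();

// Using the chunk payload itself reuses the existing buffer.
ByteString chunkOnly = requestProto.getWriteChunk().getData();
{code}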



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1204) Fix ClassNotFound issue with javax.xml.bind.DatatypeConverter used by DefaultProfile

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1204?focusedWorklogId=206613=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-206613
 ]

ASF GitHub Bot logged work on HDDS-1204:


Author: ASF GitHub Bot
Created on: 01/Mar/19 19:42
Start Date: 01/Mar/19 19:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #542: HDDS-1204. Fix 
ClassNotFound issue with javax.xml.bind.DatatypeConver…
URL: https://github.com/apache/hadoop/pull/542#issuecomment-468787402
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 975 | trunk passed |
   | +1 | compile | 41 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 38 | trunk passed |
   | +1 | shadedclient | 692 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 68 | trunk passed |
   | +1 | javadoc | 37 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 37 | the patch passed |
   | +1 | compile | 30 | the patch passed |
   | +1 | javac | 30 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 32 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 740 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 73 | the patch passed |
   | +1 | javadoc | 32 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 61 | common in the patch failed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 3011 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-542/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/542 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 9019030d0e84 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / de1dae6 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-542/1/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-542/1/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-542/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 206613)
Time Spent: 20m  (was: 10m)

> Fix ClassNotFound issue with javax.xml.bind.DatatypeConverter used by 
> DefaultProfile
> 
>
> Key: HDDS-1204
> URL: https://issues.apache.org/jira/browse/HDDS-1204
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1204.001.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The ozonesecure docker-compose has been changed to use a hadoop-runner image
> based on Java 11. Several packages/classes that were present in Java 8 have
> since been removed, such as
> javax.xml.bind.DatatypeConverter.parseHexBinary.
> This ticket is opened to fix issues running ozonesecure docker-compose on
> 

[jira] [Commented] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-03-01 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16782009#comment-16782009
 ] 

CR Hota commented on HDFS-13972:


[~elgoiri] [~brahmareddy] 

Thanks for the previous reviews.

Can you help me apply this patch (HDFS-13972-HDFS-13891.003.patch) and test the 
web functionalities? I have verified the changes in our env.

I am still figuring out how and what to wire up as unit tests. It seems we did 
not add any unit tests for the kerberos negotiate web part in the kerberos 
patch. Any pointers on which areas are good to test here? I found the class 
TestWebHdfsTokens on the namenode side. Should we do something similar?

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch, 
> HDFS-13972-HDFS-13891.002.patch, HDFS-13972-HDFS-13891.003.patch
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13972) RBF: Support for Delegation Token (WebHDFS)

2019-03-01 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-13972:
---
Attachment: HDFS-13972-HDFS-13891.003.patch

> RBF: Support for Delegation Token (WebHDFS)
> ---
>
> Key: HDFS-13972
> URL: https://issues.apache.org/jira/browse/HDFS-13972
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-13972-HDFS-13891.001.patch, 
> HDFS-13972-HDFS-13891.002.patch, HDFS-13972-HDFS-13891.003.patch
>
>
> HDFS Router should support issuing HDFS delegation tokens through WebHDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1208) ContainerStateMachine should set chunk data as state machine data for ratis

2019-03-01 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-1208:
--
Status: Patch Available  (was: Open)

> ContainerStateMachine should set chunk data as state machine data for ratis
> ---
>
> Key: HDDS-1208
> URL: https://issues.apache.org/jira/browse/HDDS-1208
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1208.001.patch
>
>
> Currently ContainerStateMachine sets ContainerCommandRequestProto as the
> state machine data. This requires converting the ContainerCommandRequestProto
> to a bytestring, which leads to a redundant buffer copy in the case of a
> write chunk request. This can be avoided by setting the chunk data as the
> state machine data for a log entry in ratis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-334) Update GettingStarted page to mention details about Ozone GenConf tool

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-334:
---
Component/s: documentation

> Update GettingStarted page to mention details about Ozone GenConf tool
> --
>
> Key: HDDS-334
> URL: https://issues.apache.org/jira/browse/HDDS-334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document, documentation
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: documentation
> Fix For: 0.2.1
>
> Attachments: HDDS-334.001.patch, HDDS-334.002.patch
>
>
> Add description about Ozone GenConf tool in GettingStarted page



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-134) SCM CA: OM sends CSR and uses certificate issued by SCM

2019-03-01 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781994#comment-16781994
 ] 

Ajay Kumar commented on HDDS-134:
-

[~xyao] thanks for review, PR to add scm name in failing test.

> SCM CA: OM sends CSR and uses certificate issued by SCM
> ---
>
> Key: HDDS-134
> URL: https://issues.apache.org/jira/browse/HDDS-134
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-134.00.patch, HDDS-134.01.patch, HDDS-134.02.patch, 
> HDDS-134.03.patch, HDDS-134.04.patch, HDDS-134.05.patch, HDDS-134.06.patch, 
> HDDS-134.07.patch, HDDS-134.08.patch, HDDS-134.09.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Initialize OM keypair and get SCM signed certificate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run

2019-03-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1211:
-
Target Version/s: 0.4.0

> Test SCMChillMode failing randomly in Jenkins run
> -
>
> Key: HDDS-1211
> URL: https://issues.apache.org/jira/browse/HDDS-1211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>  at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389) at 
> org.apache.hadoop.ozone.om.TestScmChillMode.testSCMChillMode(TestScmChillMode.java:286)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run

2019-03-01 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1211:
-
Status: Patch Available  (was: Open)

> Test SCMChillMode failing randomly in Jenkins run
> -
>
> Key: HDDS-1211
> URL: https://issues.apache.org/jira/browse/HDDS-1211
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>  at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389) at 
> org.apache.hadoop.ozone.om.TestScmChillMode.testSCMChillMode(TestScmChillMode.java:286)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1211?focusedWorklogId=206606=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-206606
 ]

ASF GitHub Bot logged work on HDDS-1211:


Author: ASF GitHub Bot
Created on: 01/Mar/19 19:26
Start Date: 01/Mar/19 19:26
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #543: 
HDDS-1211. Test SCMChillMode failing randomly in Jenkins run
URL: https://github.com/apache/hadoop/pull/543
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 206606)
Time Spent: 10m
Remaining Estimate: 0h

> Test SCMChillMode failing randomly in Jenkins run
> -
>
> Key: HDDS-1211
> URL: https://issues.apache.org/jira/browse/HDDS-1211
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>  at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389) at 
> org.apache.hadoop.ozone.om.TestScmChillMode.testSCMChillMode(TestScmChillMode.java:286)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1211) Test SCMChillMode failing randomly in Jenkins run

2019-03-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1211:
-
Labels: pull-request-available  (was: )

> Test SCMChillMode failing randomly in Jenkins run
> -
>
> Key: HDDS-1211
> URL: https://issues.apache.org/jira/browse/HDDS-1211
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> java.lang.Thread.State: TIMED_WAITING at sun.misc.Unsafe.park(Native Method) 
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>  at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748) at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:389) at 
> org.apache.hadoop.ozone.om.TestScmChillMode.testSCMChillMode(TestScmChillMode.java:286)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-864) Use strongly typed codec implementations for the tables of the OmMetadataManager

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-864:
---
Component/s: Ozone Manager

> Use strongly typed codec implementations for the tables of the 
> OmMetadataManager
> 
>
> Key: HDDS-864
> URL: https://issues.apache.org/jira/browse/HDDS-864
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM, Ozone Manager
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-864.001.patch, HDDS-864.002.patch, 
> HDDS-864.003.patch, HDDS-864.004.patch
>
>
> HDDS-748 provides a way to use higher level, strongly typed metadata Tables,
> such as a typed Table instead of the raw byte[] based Table.
> HDDS-748 provides the new TypedTable; in this jira I would fix the
> OmMetadataManagerImpl to use the type-safe tables instead of the raw ones.
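
For illustration, the strongly typed layer described above can be thought of
as a codec applied on top of the raw byte[] table (the interface shape below
follows the description and is illustrative, not the actual HDDS-748 code):

{code:java}
/** Converts a domain object to and from its persisted byte[] form. */
public interface Codec<T> {
  byte[] toPersistedFormat(T object);
  T fromPersistedFormat(byte[] rawData);
}

// A TypedTable<K, V> can then wrap the raw byte[] table and apply the codecs
// on every get/put, so OmMetadataManagerImpl works with domain objects
// directly instead of serializing by hand at each call site.
{code}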



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14328) [Clean-up] Remove NULL check before instanceof in TestGSet

2019-03-01 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14328:
--
Attachment: HDFS-14328.001.patch

> [Clean-up] Remove NULL check before instanceof in TestGSet
> --
>
> Key: HDFS-14328
> URL: https://issues.apache.org/jira/browse/HDFS-14328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
>
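
For context (a general Java fact, not part of the patch): {{instanceof}}
already evaluates to false for null, so a preceding null check is redundant:

{code:java}
Object set = null;
boolean withNullCheck = set != null && set instanceof java.util.HashSet; // false
boolean withoutCheck = set instanceof java.util.HashSet;                 // also false
{code}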




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-793) Support custom key/value annotations on volume/bucket/key

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-793:
---
Component/s: (was: OM)

> Support custom key/value annotations on volume/bucket/key
> -
>
> Key: HDDS-793
> URL: https://issues.apache.org/jira/browse/HDDS-793
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Manager
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-793.001.patch, HDDS-793.002.patch, 
> HDDS-793.003.patch, HDDS-793.004.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I propose to add a custom Map (key/value) annotation field to
> volumes/buckets and keys in Ozone Manager.
> It would enable building any extended functionality on top of the OM's
> generic interface. For example:
>  * Support tags in Ozone S3 gateway
> (https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGETtagging.html)
>  * Support md5 based ETags in s3g
>  * Store s3 related authorization data (ACLs, policies) together with the
> parent objects
> As an optional feature (could be implemented later) the client can define
> the exposed annotations. For example, s3g can define which annotations should
> be read from rocksdb on the OM side and sent to the client (s3g)
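
For illustration, the proposal amounts to carrying a free-form string map on
each OM object. A minimal sketch with assumed names (not the actual patch):

{code:java}
// Hypothetical annotations carried alongside an OM volume/bucket/key.
Map<String, String> annotations = new HashMap<>();

// e.g. the S3 gateway could store object tags and an ETag under its own keys:
annotations.put("s3.tag.project", "ozone");
annotations.put("s3.etag", "9b2cf535f27731c974343645a3985328");
{code}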



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-707) Allow registering MBeans without additional jmx properties

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-707:
---
Component/s: Ozone Manager

> Allow registering MBeans without additional jmx properties
> --
>
> Key: HDDS-707
> URL: https://issues.apache.org/jira/browse/HDDS-707
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: OM, Ozone Datanode, Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Blocker
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-707.01.patch, HDDS-707.02.patch, HDDS-707.03.patch
>
>
> HDDS and Ozone use the MBeans.register overload added by HADOOP-15339. This 
> is missing in Apache Hadoop 3.1.0 and earlier. This prevents us from building 
> Ozone with earlier versions of Hadoop. More commonly, we see runtime 
> exceptions if an earlier version of the Hadoop-common jar happens to be in 
> the classpath.
> Let's add a reflection-based switch to invoke the right version of the API so 
> we can build and use Ozone with Apache Hadoop 3.1.0.
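
For illustration, the reflection-based switch could look something like the
sketch below (the 4-argument overload is the one described as added by
HADOOP-15339; the committed patch may differ):

{code:java}
static ObjectName registerCompat(String service, String name,
    Map<String, String> jmxProperties, Object mbean) {
  try {
    // Prefer the newer overload that accepts extra JMX properties.
    Method m = MBeans.class.getMethod("register",
        String.class, String.class, Map.class, Object.class);
    return (ObjectName) m.invoke(null, service, name, jmxProperties, mbean);
  } catch (NoSuchMethodException e) {
    // Older hadoop-common on the classpath: fall back to the 3-arg overload.
    return MBeans.register(service, name, mbean);
  } catch (ReflectiveOperationException e) {
    throw new IllegalStateException("Failed to register MBean", e);
  }
}
{code}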



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-01 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781993#comment-16781993
 ] 

Hanisha Koneru commented on HDDS-1072:
--

[~elek], I am sorry I missed your comment and committed the patch. 
I missed adding the Tracing proxy in this patch. I have opened DDS-1212 to fix 
this.

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch
>
>
> RPC Client should implement a retry and failover proxy provider to fail over
> between OM Ratis clients. The failover should occur in two scenarios:
> # When the client is unable to connect to the OM (either because of network
> issues or because the OM is down). The client retry proxy provider should
> fail over to the next OM in the cluster.
> # When the OM Ratis Client receives a response from the Ratis server for its
> request, it also gets the LeaderId of the server which processed this request
> (the current Leader OM nodeId). This information should be propagated back to
> the client. The client failover proxy provider should fail over to the leader
> OM node. This helps avoid an extra hop from a Follower OM Ratis client to the
> Leader OM Ratis server for every request.
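
For illustration, wiring such a provider through Hadoop's retry utilities
might look like the sketch below (OMFailoverProxyProvider and the failover
count are assumed names/values; this is not the committed patch):

{code:java}
// Hypothetical sketch using org.apache.hadoop.io.retry.
int maxFailovers = 15;  // assumed setting
FailoverProxyProvider<OzoneManagerProtocol> provider =
    new OMFailoverProxyProvider(conf);

RetryPolicy policy = RetryPolicies.failoverOnNetworkException(
    RetryPolicies.TRY_ONCE_THEN_FAIL, maxFailovers);

OzoneManagerProtocol proxy = (OzoneManagerProtocol) RetryProxy.create(
    OzoneManagerProtocol.class, provider, policy);
{code}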



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1072) Implement RetryProxy and FailoverProxy for OM client

2019-03-01 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16781993#comment-16781993
 ] 

Hanisha Koneru edited comment on HDDS-1072 at 3/1/19 7:21 PM:
--

[~elek], I am sorry I missed your comment and committed the patch. 
 I missed adding the Tracing proxy in this patch. I have opened HDDS-1212 to 
fix this.


was (Author: hanishakoneru):
[~elek], I am sorry I missed your comment and committed the patch. 
I missed adding the Tracing proxy in this patch. I have opened DDS-1212 to fix 
this.

> Implement RetryProxy and FailoverProxy for OM client
> 
>
> Key: HDDS-1072
> URL: https://issues.apache.org/jira/browse/HDDS-1072
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-1072.001.patch, HDDS-1072.002.patch, 
> HDDS-1072.003.patch, HDDS-1072.004.patch, HDDS-1072.005.patch, 
> HDDS-1072.006.patch
>
>
> RPC Client should implement a retry and failover proxy provider to fail over
> between OM Ratis clients. The failover should occur in two scenarios:
> # When the client is unable to connect to the OM (either because of network
> issues or because the OM is down). The client retry proxy provider should
> fail over to the next OM in the cluster.
> # When the OM Ratis Client receives a response from the Ratis server for its
> request, it also gets the LeaderId of the server which processed this request
> (the current Leader OM nodeId). This information should be propagated back to
> the client. The client failover proxy provider should fail over to the leader
> OM node. This helps avoid an extra hop from a Follower OM Ratis client to the
> Leader OM Ratis server for every request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1212) Add Tracing back to OzoneManagerProtocol

2019-03-01 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-1212:


 Summary: Add Tracing back to OzoneManagerProtocol
 Key: HDDS-1212
 URL: https://issues.apache.org/jira/browse/HDDS-1212
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


HDDS-1072 removed Tracing from OM proxy as reported by [~elek]. We should add 
this back to get the tracing information. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14328) [Clean-up] Remove NULL check before instanceof in TestGSet

2019-03-01 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14328:
--
Attachment: (was: HDFS-14328.001.patch)

> [Clean-up] Remove NULL check before instanceof in TestGSet
> --
>
> Key: HDFS-14328
> URL: https://issues.apache.org/jira/browse/HDFS-14328
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-166) Create a landing page for Ozone

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-166:
---
Component/s: (was: document)

> Create a landing page for Ozone
> ---
>
> Key: HDDS-166
> URL: https://issues.apache.org/jira/browse/HDDS-166
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: ozone-site-rendered.tar.gz, ozone-site-source.tar.gz
>
>
> As the Ozone release cycle is separated from Hadoop, we need a separate page
> to publish the releases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-628) Fix outdated names used in HDDS documentations

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-628:
---
Component/s: (was: document)

> Fix outdated names used in HDDS documentations
> --
>
> Key: HDDS-628
> URL: https://issues.apache.org/jira/browse/HDDS-628
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.2.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 0.3.0, 0.4.0
>
> Attachments: HDDS-628.001.patch, HDDS-628.002.patch
>
>
> I took some time to read the whole HDDS documentation on the web site and
> found some outdated names used there.
> dozone.html: the name {{corona}} should be {{freon}} in the following:
>  1). "*Here are the instructions to run corona in a docker based cluster*."
>  2). "*Now we can execute corona for load generation*."
> hdds.html: looks like {{KSM}} is the old name; it is now OM:
>  "*To put a key, a client makes a call to KSM with the following arguments.*"
> Some other nits can also be fixed here:
> ozonemanager.html: "*...OM talk to SCM...*" talk --> talks
> javaapi.html: "*And to get a a RPC client we can call...*" duplicate "a"
> settings.html: "*Please untar the ozone-0.2.1-SNAPSHOT to the directory...*"
> I prefer using ozone- instead of the explicit version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-119:
---
Component/s: (was: document)

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-147) Update Ozone site docs

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-147:
---
Component/s: (was: document)

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Fix For: 0.2.1
>
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch, HDDS-147.05.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-435) Enhance the existing ozone documentation

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-435:
---
Component/s: (was: document)

> Enhance the existing ozone documentation
> 
>
> Key: HDDS-435
> URL: https://issues.apache.org/jira/browse/HDDS-435
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-435-ozone-0.2.001.patch, 
> HDDS-435-ozone-0.2.004.patch, HDDS-435-ozone-0.2.005.patch, 
> HDDS-435.002.patch, HDDS-435.003.patch
>
>
> hadoop-ozone/docs contains some documentation but it covers only a limit set 
> of ozone features.
> I imported the documentation from HDFS-12664 (which was written by [~anu]) 
> and updated the files according to the latest changes.
> Also adjusted the structure of the documentation site (with using sub menus), 
> started to use syntax highlighting.
> I also modified the dist script to include the docs file in the root folder 
> of the distribution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-334) Update GettingStarted page to mention details about Ozone GenConf tool

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-334:
---
Component/s: (was: document)

> Update GettingStarted page to mention details about Ozone GenConf tool
> --
>
> Key: HDDS-334
> URL: https://issues.apache.org/jira/browse/HDDS-334
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: documentation
> Fix For: 0.2.1
>
> Attachments: HDDS-334.001.patch, HDDS-334.002.patch
>
>
> Add description about Ozone GenConf tool in GettingStarted page



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-630) Rename KSM to OM in Hdds.md

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-630:
---
Component/s: (was: document)

> Rename KSM to OM in Hdds.md
> ---
>
> Key: HDDS-630
> URL: https://issues.apache.org/jira/browse/HDDS-630
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 0.3.0
>
> Attachments: HDDS-630.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-320) Failed to start container with apache/hadoop-runner image.

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-320:
---
Component/s: (was: document)

> Failed to start container with apache/hadoop-runner image.
> --
>
> Key: HDDS-320
> URL: https://issues.apache.org/jira/browse/HDDS-320
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
> Environment: centos 7.4
>Reporter: Junjie Chen
>Priority: Minor
>
> Following the doc in hadoop-ozone/docs/content/GettingStarted.md, the 
> docker-compose up -d step failed; the errors are listed below:
> [root@VM_16_5_centos ozone]# docker-compose logs
> Attaching to ozone_scm_1, ozone_datanode_1, ozone_ozoneManager_1
> datanode_1  | Traceback (most recent call last):
> datanode_1  |   File "/opt/envtoconf.py", line 104, in <module>
> datanode_1  |     Simple(sys.argv[1:]).main()
> datanode_1  |   File "/opt/envtoconf.py", line 93, in main
> datanode_1  |     self.process_envs()
> datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
> datanode_1  |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
> datanode_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> datanode_1  | Traceback (most recent call last):
> datanode_1  |   File "/opt/envtoconf.py", line 104, in <module>
> datanode_1  |     Simple(sys.argv[1:]).main()
> datanode_1  |   File "/opt/envtoconf.py", line 93, in main
> datanode_1  |     self.process_envs()
> datanode_1  |   File "/opt/envtoconf.py", line 67, in process_envs
> datanode_1  |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
> ozoneManager_1  |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
> ozoneManager_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> ozoneManager_1  | Traceback (most recent call last):
> ozoneManager_1  |   File "/opt/envtoconf.py", line 104, in <module>
> ozoneManager_1  |     Simple(sys.argv[1:]).main()
> ozoneManager_1  |   File "/opt/envtoconf.py", line 93, in main
> ozoneManager_1  |     self.process_envs()
> ozoneManager_1  |   File "/opt/envtoconf.py", line 67, in process_envs
> ozoneManager_1  |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
> ozoneManager_1  | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> scm_1   | Traceback (most recent call last):
> scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
> scm_1   |     Simple(sys.argv[1:]).main()
> scm_1   |   File "/opt/envtoconf.py", line 93, in main
> scm_1   |     self.process_envs()
> scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
> scm_1   |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
> scm_1   | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> scm_1   | Traceback (most recent call last):
> scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
> scm_1   |     Simple(sys.argv[1:]).main()
> scm_1   |   File "/opt/envtoconf.py", line 93, in main
> scm_1   |     self.process_envs()
> scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
> scm_1   |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
> scm_1   | IOError: [Errno 13] Permission denied: '/opt/hadoop/etc/hadoop/log4j.properties.raw'
> scm_1   | Traceback (most recent call last):
> scm_1   |   File "/opt/envtoconf.py", line 104, in <module>
> scm_1   |     Simple(sys.argv[1:]).main()
> scm_1   |   File "/opt/envtoconf.py", line 93, in main
> scm_1   |     self.process_envs()
> scm_1   |   File "/opt/envtoconf.py", line 67, in process_envs
> scm_1   |     with open(self.destination_file_path(name, extension) + ".raw", "w") as myfile:
> scm_1
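
The IOError above means envtoconf.py cannot write the generated configuration 
files into /opt/hadoop/etc/hadoop inside the containers. Two plausible causes, 
hedged since the root cause is not confirmed in this thread: the 
apache/hadoop-runner image runs its processes as a non-root user, and on 
CentOS (the reported environment) SELinux frequently blocks container writes 
to bind-mounted volumes. A quick diagnostic sketch:

{code}
# Check whether SELinux is enforcing (a frequent culprit on CentOS 7)
getenforce

# If the compose file bind-mounts the Ozone distribution, relabeling the
# volume for container access is one workaround, e.g. appending :z to the
# volume entry in docker-compose.yaml (the entry shown is illustrative):
#   volumes:
#     - ../..:/opt/hadoop:z

# Also verify the mounted directory is writable by the container user
# (the uid depends on the image):
ls -ln /opt/hadoop/etc/hadoop
{code}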

[jira] [Updated] (HDDS-367) Cleanup GettingStarted page to remove usage of ksm

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-367:
---
Component/s: (was: document)

> Cleanup GettingStarted page to remove usage of ksm
> --
>
> Key: HDDS-367
> URL: https://issues.apache.org/jira/browse/HDDS-367
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Trivial
>
> As part of HDDS-167, KeySpaceManager was renamed to OzoneManager, and usage 
> of ksm was replaced with om.
> There are still some traces of ksm on the Getting Started page, where 
> commands and configuration properties use ksm instead of om.
>  
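
For context, the rename touches both daemon commands and configuration keys. 
A hedged before/after sketch (the key names below follow the pattern of the 
rename; consult ozone-default.xml for the authoritative list):

{code}
# Before HDDS-167                    # After
ozone --daemon start ksm             ozone --daemon start om
ozone.ksm.address                    ozone.om.address
ozone.ksm.http-address               ozone.om.http-address
{code}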



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-307) docs link on ozone website is broken

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-307:
---
Component/s: (was: document)

> docs link on ozone website is broken
> 
>
> Key: HDDS-307
> URL: https://issues.apache.org/jira/browse/HDDS-307
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Arpit Agarwal
>Assignee: Xiaoyu Yao
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-307.00.patch, hadoop-ozonesite-public.tar
>
>
> The docs link on _ozone.hadoop.apache.org_ is broken:
> [http://ozone.hadoop.apache.org/docs]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-147) Update Ozone site docs

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-147:
---
Component/s: documentation

> Update Ozone site docs
> --
>
> Key: HDDS-147
> URL: https://issues.apache.org/jira/browse/HDDS-147
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: document, documentation
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Major
>  Labels: documentation
> Fix For: 0.2.1
>
> Attachments: HDDS-147.01.patch, HDDS-147.02.patch, HDDS-147.03.patch, 
> HDDS-147.04.patch, HDDS-147.05.patch
>
>
> Ozone site docs need a few updates to the command syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-367) Cleanup GettingStarted page to remove usage of ksm

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-367:
---
Component/s: documentation

> Cleanup GettingStarted page to remove usage of ksm
> --
>
> Key: HDDS-367
> URL: https://issues.apache.org/jira/browse/HDDS-367
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document, documentation
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Trivial
>
> As part of HDDS-167, KeySpaceManager was renamed to OzoneManager, and usage 
> of ksm was replaced with om.
> There are still some traces of ksm on the Getting Started page, where 
> commands and configuration properties use ksm instead of om.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-119) Skip Apache license header check for some ozone doc scripts

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-119:
---
Component/s: documentation

> Skip Apache license header check for some ozone doc scripts
> ---
>
> Key: HDDS-119
> URL: https://issues.apache.org/jira/browse/HDDS-119
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document, documentation
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-119.00.patch, HDDS-119.01.patch, HDDS-119.02.patch, 
> HDDS-119.03.patch
>
>
> {code}
> Lines that start with ? in the ASF License report indicate files that do 
> not have an Apache license header: !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/theme.toml !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/fonts/glyphicons-halflings-regular.svg
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/bootstrap.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/js/jquery.min.js 
> !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css.map
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap.min.css
>  !? 
> /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/static/css/bootstrap-theme.min.css.map
>  !? /testptch/hadoop/hadoop-ozone/docs/themes/ozonedoc/layouts/index.html 
> !? /testptch/hadoop/hadoop-ozone/docs/static/OzoneOverview.svg
> {code}
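
The usual fix is to add excludes to the apache-rat-plugin configuration in the 
module's pom.xml. A minimal sketch, assuming the theme assets listed above are 
the ones to exempt (the patterns are illustrative, not the exact ones 
committed):

{code:xml}
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <!-- Third-party, minified, or generated doc assets -->
      <exclude>themes/ozonedoc/static/js/*.min.js</exclude>
      <exclude>themes/ozonedoc/static/css/*.min.css*</exclude>
      <exclude>themes/ozonedoc/static/fonts/*</exclude>
      <exclude>themes/ozonedoc/theme.toml</exclude>
      <exclude>themes/ozonedoc/layouts/index.html</exclude>
      <exclude>static/OzoneOverview.svg</exclude>
    </excludes>
  </configuration>
</plugin>
{code}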



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-630) Rename KSM to OM in Hdds.md

2019-03-01 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-630:
---
Component/s: documentation

> Rename KSM to OM in Hdds.md
> ---
>
> Key: HDDS-630
> URL: https://issues.apache.org/jira/browse/HDDS-630
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: document, documentation
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Minor
> Fix For: 0.3.0
>
> Attachments: HDDS-630.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


