[GitHub] [hadoop] arp7 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
arp7 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267171595
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -186,16 +197,18 @@ private void run() {
     while (running) {
       try {
         final long start = Time.monotonicNow();
-        final List<ContainerID> containerIds =
+        final Set<ContainerID> containerIds =
             containerManager.getContainerIDs();
-        containerIds.parallelStream().forEach(this::processContainer);
-        LOG.debug("Replication Monitor Thread took {} milliseconds for" +
+        containerIds.forEach(this::processContainer);
+        LOG.info("Replication Monitor Thread took {} milliseconds for" +
             " processing {} containers.", Time.monotonicNow() - start,
             containerIds.size());
-        Thread.sleep(interval);
+        if (!Thread.interrupted()) {
+          Thread.sleep(interval);
+        }
       } catch (InterruptedException ex) {
         // Wakeup and process the containers.
-        LOG.debug("Replication Monitor Thread got interrupt exception.");
+        LOG.debug("Replication Monitor Thread got interrupted.");
 
 Review comment:
   Sorry I missed this earlier. We should call 
Thread.currentThread().interrupt() here. https://stackoverflow.com/a/4906814 
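
   A minimal, self-contained Java sketch of the suggested handling, assuming 
the loop from the diff above (illustrative only; `processAllContainers` is a 
hypothetical stand-in for the container-processing code, not the actual patch):

```
private void run() {
  while (running) {
    try {
      processAllContainers();  // hypothetical stand-in for the loop body
      Thread.sleep(interval);
    } catch (InterruptedException ex) {
      LOG.debug("Replication Monitor Thread got interrupted.");
      // Catching InterruptedException clears the thread's interrupt flag;
      // restore it so the owner of this thread can still observe it.
      Thread.currentThread().interrupt();
    }
  }
}
```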





[GitHub] [hadoop] arp7 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
arp7 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267171164
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is responsible for making sure that containers
+ * are properly replicated. Replication Manager deals only with Quasi
+ * Closed / Closed containers.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
 
 Review comment:
   We discussed this offline and I think the agreement was to fix the locking 
in the container report processor.
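
   For context, a hypothetical sketch of how such per-container locking is 
typically used (the `lock`/`unlock` calls follow the field documentation 
above; method names and placement are assumptions, not the final patch):

```
private void process(final ContainerID containerID) {
  // Serialize all work on one container by locking on its ID, so that
  // report processing and replication checks cannot interleave.
  lockManager.lock(containerID);
  try {
    processContainer(containerID);
  } finally {
    lockManager.unlock(containerID);
  }
}
```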





[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-19 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796710#comment-16796710
 ] 

Yuming Wang commented on HADOOP-16152:
--

[~ste...@apache.org] [~jojochuang] Could you help review this patch? The 
conflict only occurs when running the Spark YARN tests. I tried to work around 
this issue on the Spark side, but every attempt failed:
 # Replacing {{hadoop-yarn-server-tests}} with {{hadoop-client-minicluster}} 
still leaves various class conflicts. [This is an 
example|https://github.com/wangyum/spark-hadoop-client-minicluster] of Spark 
running the YARN tests that way.
 # Changing the Jetty version in the YARN module when testing with hadoop-3, 
while the Spark core module stays on 9.4.x. In this case the older version is 
evicted by 9.4.12.v20180830:
{noformat}
org.eclipse.jetty:jetty-servlet:9.4.12.v20180830 is selected over 
9.3.24.v20180605
{noformat}

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16152.v1.patch
>
>
> Some big data projects have upgraded to Jetty 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211






[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-19 Thread GitBox
hadoop-yetus commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267152767
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-image/docker-krb5/README.md
 ##
 @@ -0,0 +1,34 @@
+
+
+# Experimental UNSECURE krb5 Kerberos container.
+
+Only for development. Not for production.
+
+The docker image contains a REST service which provides keystore and keytab 
files without any authentication!
+
+Master password: Welcome1
+
+Principal: admin/ad...@example.com Password: Welcome1
+
+Test:
+
+```
+docker run --net=host krb5
+
+docker run --net=host -it --entrypoint=bash krb5
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-19 Thread GitBox
hadoop-yetus commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267152746
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/README.md
 ##
 @@ -0,0 +1,56 @@
+
+# Secure Docker-compose with KMS, YARN RM and NM
+This docker-compose setup allows testing sample MapReduce jobs with
+OzoneFileSystem. It is a superset of the ozonesecure docker-compose, adding
+YARN NM/RM in addition to Ozone OM/SCM/NM/DN and the Kerberos KDC. 
+
+## Basic setup
+
+```
+docker-compose up -d
+```
+
+## Ozone Manager Setup
+
+```
+kinit -kt /etc/security/keytabs/testuser.keytab testuser/o...@example.com
+
+ozone sh volume create /vol1
+ozone sh bucket create /vol1/bucket1
+ozone sh key put /vol1/bucket1/key1 LICENSE.txt
+
+ozone fs -ls o3fs://bucket1.vol1/
+```
+
+## Yarn Resource Manager Setup
+```
+kinit -kt /etc/security/keytabs/testuser.keytab testuser/r...@example.com
+export HADOOP_MAPRED_HOME=/opt/hadoop/share/hadoop/mapreduce
+
+export 
HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current-0.5.0-SNAPSHOT.jar
+
+hadoop fs -mkdir /user
+hadoop fs -mkdir /user/root
+
+
+```
+
+## Run Examples
+
+### WordCount
+
+```
+yarn jar $HADOOP_MAPRED_HOME/hadoop-mapreduce-examples-*.jar wordcount 
o3fs://bucket1.vol1/key1 o3fs://bucket1.vol1/key1.count 
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] hadoop-yetus commented on issue #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #627: HDDS-1299. Support TokenIssuer interface 
for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#issuecomment-474641750
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1006 | trunk passed |
   | +1 | compile | 114 | trunk passed |
   | +1 | checkstyle | 42 | trunk passed |
   | +1 | mvnsite | 145 | trunk passed |
   | +1 | shadedclient | 658 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 126 | trunk passed |
   | +1 | javadoc | 105 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | -1 | mvninstall | 22 | dist in the patch failed. |
   | +1 | compile | 95 | the patch passed |
   | +1 | javac | 95 | the patch passed |
   | -0 | checkstyle | 26 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -1 | hadolint | 1 | The patch generated 5 new + 0 unchanged - 0 fixed = 5 
total (was 0) |
   | +1 | mvnsite | 106 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 18 | There were no new shelldocs issues. |
   | -1 | whitespace | 0 | The patch has 4 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 137 | the patch passed |
   | +1 | javadoc | 92 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 30 | client in the patch passed. |
   | +1 | unit | 45 | ozone-manager in the patch passed. |
   | +1 | unit | 116 | ozonefs in the patch passed. |
   | +1 | unit | 25 | dist in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 4076 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/627 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  yamllint  shellcheck  
shelldocs  hadolint  |
   | uname | Linux 01d69533ed88 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 310ebf5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | hadolint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/1/artifact/out/diff-patch-hadolint.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/1/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/1/testReport/ |
   | Max. process+thread count | 3126 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/client hadoop-ozone/ozone-manager 
hadoop-ozone/ozonefs hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-19 Thread GitBox
hadoop-yetus commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267152743
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/README.md
 ##
 @@ -0,0 +1,56 @@
+
+# Secure Docker-compose with KMS, YARN RM and NM
+This docker-compose setup allows testing sample MapReduce jobs with
+OzoneFileSystem. It is a superset of the ozonesecure docker-compose, adding
+YARN NM/RM in addition to Ozone OM/SCM/NM/DN and the Kerberos KDC. 
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-19 Thread GitBox
hadoop-yetus commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267152750
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-compose.yaml
 ##
 @@ -0,0 +1,103 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+version: "3"
+services:
+  kdc:
+build:
+  context: docker-image/docker-krb5
+  dockerfile: Dockerfile-krb5
+  args:
+buildno: 1
+hostname: kdc
+volumes:
+  - ../..:/opt/hadoop
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey call Should happen only leader OM.

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey call Should 
happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-474634424
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 514 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 49 | Maven dependency ordering for branch |
   | +1 | mvninstall | 983 | trunk passed |
   | +1 | compile | 95 | trunk passed |
   | +1 | checkstyle | 28 | trunk passed |
   | +1 | mvnsite | 109 | trunk passed |
   | +1 | shadedclient | 811 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 98 | trunk passed |
   | +1 | javadoc | 81 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | +1 | mvninstall | 101 | the patch passed |
   | +1 | compile | 90 | the patch passed |
   | +1 | cc | 90 | the patch passed |
   | +1 | javac | 90 | the patch passed |
   | -0 | checkstyle | 22 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 86 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 744 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 123 | the patch passed |
   | +1 | javadoc | 76 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 36 | common in the patch passed. |
   | -1 | unit | 41 | ozone-manager in the patch failed. |
   | -1 | unit | 617 | integration-test in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 4771 | |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.ozone.om.ratis.TestOzoneManagerRatisServer |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 502ca71d7a1b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 310ebf5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/1/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/1/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/1/testReport/ |
   | Max. process+thread count | 4123 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao opened a new pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-19 Thread GitBox
xiaoyuyao opened a new pull request #627: HDDS-1299. Support TokenIssuer 
interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #600: HDFS-14348: Fix JNI exception handling issues in libhdfs

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #600: HDFS-14348: Fix JNI exception handling 
issues in libhdfs
URL: https://github.com/apache/hadoop/pull/600#issuecomment-474623367
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1130 | trunk passed |
   | +1 | compile | 108 | trunk passed |
   | +1 | mvnsite | 20 | trunk passed |
   | +1 | shadedclient | 1985 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 19 | the patch passed |
   | +1 | compile | 107 | the patch passed |
   | +1 | cc | 107 | the patch passed |
   | +1 | javac | 107 | the patch passed |
   | +1 | mvnsite | 18 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 826 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 403 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3558 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/600 |
   | JIRA Issue | HDFS-14348 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux 07504a4b884c 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 310ebf5 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/4/testReport/ |
   | Max. process+thread count | 304 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 opened a new pull request #626: HDDS-1262. In OM HA OpenKey call Should happen only leader OM.

2019-03-19 Thread GitBox
bharatviswa504 opened a new pull request #626: HDDS-1262. In OM HA OpenKey call 
Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #625: HDDS-1262. In OM HA OpenKey call Should happen only leader OM.

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #625: HDDS-1262. In OM HA OpenKey call Should 
happen only leader OM.
URL: https://github.com/apache/hadoop/pull/625#issuecomment-474604625
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 6 | https://github.com/apache/hadoop/pull/625 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | GITHUB PR | https://github.com/apache/hadoop/pull/625 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-625/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 opened a new pull request #625: HDDS-1262. In OM HA OpenKey call Should happen only leader OM.

2019-03-19 Thread GitBox
bharatviswa504 opened a new pull request #625: HDDS-1262. In OM HA OpenKey call 
Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/625
 
 
   





[GitHub] [hadoop] bharatviswa504 closed pull request #625: HDDS-1262. In OM HA OpenKey call Should happen only leader OM.

2019-03-19 Thread GitBox
bharatviswa504 closed pull request #625: HDDS-1262. In OM HA OpenKey call 
Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/625
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-474598543
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 983 | trunk passed |
   | +1 | compile | 96 | trunk passed |
   | +1 | mvnsite | 23 | trunk passed |
   | +1 | shadedclient | 1761 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 15 | the patch passed |
   | +1 | compile | 91 | the patch passed |
   | +1 | cc | 91 | the patch passed |
   | +1 | javac | 91 | the patch passed |
   | +1 | mvnsite | 15 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 680 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 339 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 3092 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/595 |
   | JIRA Issue | HDFS-14304 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux 85f868aca893 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1639071 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/4/testReport/ |
   | Max. process+thread count | 410 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267097855
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -534,6 +545,21 @@ JNIEnv* getJNIEnv(void)
 
     state->env = getGlobalJNIEnv();
     mutexUnlock();
+
+    if (!jclassesInitialized) {
+        mutexLock();
 
 Review comment:
   Fixed





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267097589
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.h
 ##
 @@ -0,0 +1,117 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBHDFS_JCLASSES_H
+#define LIBHDFS_JCLASSES_H
+
+#include <jni.h>
+
+/**
+ * Encapsulates logic to cache jclass objects so they can be re-used across
+ * calls to FindClass. Creating jclass objects every time libhdfs has to
+ * invoke a method can hurt performance. By caching jclass objects we avoid
+ * this overhead.
+ *
+ * We use the term "cached" here loosely; jclasses are not truly cached,
+ * instead they are created once during JVM load and are kept alive until the
+ * process shuts down. There is no eviction of jclass objects.
+ *
+ * @see https://www.ibm.com/developerworks/library/j-jni/index.html#notc
+ */
+
+/**
+ * Each enum value represents one jclass that is cached. Enum values should
+ * be passed to getJclass or getName to get the jclass object or class name
+ * represented by the enum value.
+ */
+typedef enum {
+JC_CONFIGURATION,
+JC_PATH,
+JC_FILE_SYSTEM,
+JC_FS_STATUS,
+JC_FILE_UTIL,
+JC_BLOCK_LOCATION,
+JC_DFS_HEDGED_READ_METRICS,
+JC_DISTRIBUTED_FILE_SYSTEM,
+JC_FS_DATA_INPUT_STREAM,
+JC_FS_DATA_OUTPUT_STREAM,
+JC_FILE_STATUS,
+JC_FS_PERMISSION,
+JC_READ_STATISTICS,
+JC_HDFS_DATA_INPUT_STREAM,
+JC_DOMAIN_SOCKET,
+JC_URI,
+JC_BYTE_BUFFER,
+JC_ENUM_SET,
+JC_EXCEPTION_UTILS,
+// A special marker enum that counts the number of cached jclasses
+NUM_CACHED_CLASSES
+} CachedJavaClass;
+
+/**
+ * Whether initCachedClasses has been called or not. Protected by the mutex
+ * jclassInitMutex.
+ */
+extern int jclassesInitialized;
+
+/**
+ * Internally initializes all jclass objects listed in the CachedJavaClass 
enum.
+ */
+jthrowable initCachedClasses(JNIEnv* env);
 
 Review comment:
   Done





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267097547
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.h
 ##
 @@ -0,0 +1,117 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBHDFS_JCLASSES_H
+#define LIBHDFS_JCLASSES_H
+
+#include <jni.h>
+
+/**
+ * Encapsulates logic to cache jclass objects so they can be re-used across
+ * calls to FindClass. Creating jclass objects every time libhdfs has to
+ * invoke a method can hurt performance. By caching jclass objects we avoid
+ * this overhead.
+ *
+ * We use the term "cached" here loosely; jclasses are not truly cached,
+ * instead they are created once during JVM load and are kept alive until the
+ * process shuts down. There is no eviction of jclass objects.
+ *
+ * @see https://www.ibm.com/developerworks/library/j-jni/index.html#notc
+ */
+
+/**
+ * Each enum value represents one jclass that is cached. Enum values should
+ * be passed to getJclass or getName to get the jclass object or class name
+ * represented by the enum value.
+ */
+typedef enum {
+JC_CONFIGURATION,
+JC_PATH,
+JC_FILE_SYSTEM,
+JC_FS_STATUS,
+JC_FILE_UTIL,
+JC_BLOCK_LOCATION,
+JC_DFS_HEDGED_READ_METRICS,
+JC_DISTRIBUTED_FILE_SYSTEM,
+JC_FS_DATA_INPUT_STREAM,
+JC_FS_DATA_OUTPUT_STREAM,
+JC_FILE_STATUS,
+JC_FS_PERMISSION,
+JC_READ_STATISTICS,
+JC_HDFS_DATA_INPUT_STREAM,
+JC_DOMAIN_SOCKET,
+JC_URI,
+JC_BYTE_BUFFER,
+JC_ENUM_SET,
+JC_EXCEPTION_UTILS,
+// A special marker enum that counts the number of cached jclasses
+NUM_CACHED_CLASSES
+} CachedJavaClass;
+
+/**
+ * Whether initCachedClasses has been called or not. Protected by the mutex
+ * jclassInitMutex.
+ */
+extern int jclassesInitialized;
 
 Review comment:
   Moved `jclassesInitialized` to `jclasses.c` and changed it to `static` 
instead of `extern`. `initCachedClasses` acquires a lock, checks whether 
`jclassesInitialized` is true, and conditionally loads the `jclass`es.
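
   For readers following along, a Java analogue of the pattern described above 
(the actual patch is C with pthread mutexes; all names here are illustrative):

```
// The flag is private to the file (C `static`), and initialization always
// takes the lock before checking the flag, then loads classes at most once.
final class CachedClasses {
  private static final Object LOCK = new Object();
  private static boolean initialized = false;  // guarded by LOCK

  static void initCachedClasses() {
    synchronized (LOCK) {
      if (!initialized) {
        loadAllJclasses();  // one-time, expensive class loading
        initialized = true;
      }
    }
  }

  private static void loadAllJclasses() { /* FindClass + NewGlobalRef, etc. */ }
}
```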





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267097811
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -534,6 +545,21 @@ JNIEnv* getJNIEnv(void)
 
     state->env = getGlobalJNIEnv();
     mutexUnlock();
+
+    if (!jclassesInitialized) {
+        mutexLock();
+        if (!jclassesInitialized) {
 
 Review comment:
   Fixed. Removed the first `if` statement, so `initCachedClasses` always 
acquires the lock.





[GitHub] [hadoop] arp7 commented on issue #623: HDDS-1308. Fix asf license errors.

2019-03-19 Thread GitBox
arp7 commented on issue #623: HDDS-1308. Fix asf license errors.
URL: https://github.com/apache/hadoop/pull/623#issuecomment-474584567
 
 
   +1 this is a comment-only change. Let's get it in so we can start seeing 
clean RAT reports from Jenkins.





[GitHub] [hadoop] arp7 merged pull request #623: HDDS-1308. Fix asf license errors.

2019-03-19 Thread GitBox
arp7 merged pull request #623: HDDS-1308. Fix asf license errors.
URL: https://github.com/apache/hadoop/pull/623
 
 
   





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267096318
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -534,6 +545,21 @@ JNIEnv* getJNIEnv(void)
 
     state->env = getGlobalJNIEnv();
     mutexUnlock();
+
+    if (!jclassesInitialized) {
+        mutexLock();
+        if (!jclassesInitialized) {
+            jthrowable jthr = NULL;
+            jthr = initCachedClasses(state->env);
+            if (jthr) {
+                printExceptionAndFree(state->env, jthr, PRINT_EXC_ALL,
+                        "initCachedClasses failed");
+                return NULL;
 
 Review comment:
   Yeah, fixed. I think the one above is okay; the cleanup is dependent on 
`ThreadLocalState` being created.





[jira] [Commented] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796515#comment-16796515
 ] 

Hadoop QA commented on HADOOP-16156:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 34m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 4 unchanged - 15 fixed = 4 total (was 19) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
31s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16156 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12963027/HADOOP-16156.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8e45de61652b 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5d8bd0e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16068/testReport/ |
| Max. process+thread count | 1380 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16068/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on issue #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #620: HDDS-1205. Refactor ReplicationManager to 
handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#issuecomment-474581768
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for branch |
   | +1 | mvninstall | 989 | trunk passed |
   | +1 | compile | 75 | trunk passed |
   | +1 | checkstyle | 30 | trunk passed |
   | +1 | mvnsite | 78 | trunk passed |
   | +1 | shadedclient | 773 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 111 | trunk passed |
   | +1 | javadoc | 60 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 71 | the patch passed |
   | +1 | compile | 68 | the patch passed |
   | +1 | javac | 68 | the patch passed |
   | +1 | checkstyle | 24 | the patch passed |
   | +1 | mvnsite | 61 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 124 | the patch passed |
   | +1 | javadoc | 56 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 63 | common in the patch passed. |
   | +1 | unit | 106 | server-scm in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3580 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-620/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/620 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux 6d8b77e65902 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 5d8bd0e |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-620/2/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-620/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267094510
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -16,19 +16,17 @@
  * limitations under the License.
  */
 
+#include "jclasses.h"
 
 Review comment:
   Done





[GitHub] [hadoop] hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-474581306
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 11 | https://github.com/apache/hadoop/pull/595 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/595 |
   | JIRA Issue | HDFS-14304 |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-595/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] arp7 merged pull request #622: HDDS-1307. Test ScmChillMode testChillModeOperations failed.

2019-03-19 Thread GitBox
arp7 merged pull request #622: HDDS-1307. Test ScmChillMode 
testChillModeOperations failed.
URL: https://github.com/apache/hadoop/pull/622
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #600: HDFS-14348: Fix JNI exception handling issues in libhdfs

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #600: HDFS-14348: Fix JNI exception handling 
issues in libhdfs
URL: https://github.com/apache/hadoop/pull/600#issuecomment-474577439
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests.  Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1080 | trunk passed |
   | +1 | compile | 109 | trunk passed |
   | +1 | mvnsite | 22 | trunk passed |
   | +1 | shadedclient | 2005 | branch has no errors when building and testing our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 20 | the patch passed |
   | +1 | compile | 119 | the patch passed |
   | -1 | cc | 119 | hadoop-hdfs-project_hadoop-hdfs-native-client generated 33 new + 2 unchanged - 0 fixed = 35 total (was 2) |
   | +1 | javac | 119 | the patch passed |
   | +1 | mvnsite | 31 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 883 | patch has no errors when building and testing our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 449 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 3755 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-600/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/600 |
   | JIRA Issue | HDFS-14348 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux 292b60234842 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 5d8bd0e |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-600/3/artifact/out/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-600/3/testReport/ |
   | Max. process+thread count | 304 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-600/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16181) HadoopExecutors shutdown Cleanup

2019-03-19 Thread David Mollitor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796468#comment-16796468
 ] 

David Mollitor commented on HADOOP-16181:
-

[~ste...@apache.org] What do you think about the latest patch?

Thanks.

> HadoopExecutors shutdown Cleanup
> 
>
> Key: HADOOP-16181
> URL: https://issues.apache.org/jira/browse/HADOOP-16181
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16181.1.patch, HADOOP-16181.2.patch
>
>
> # Add method description
> # Add additional logging
> # Do not log-and-throw Exception.  Anti-pattern.






[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267072642
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is the one which is responsible for making sure
+ * that the containers are properly replicated. Replication Manager deals only
+ * with Quasi Closed / Closed container.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
+
+  /**
+   * This is used to track container replication commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightReplication;
+
+  /**
+   * This is used to track container deletion commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightDeletion;
+
+  /**
+   * ReplicationMonitor thread is the one which wakes up at configured
+   * interval and processes all the containers.
+   */
+  private final Thread replicationMonitor;
+
+  /**
+   * The frequency in which ReplicationMonitor thread should run.
+   */
+  private final long interval;
+
+  /**
+   * Timeout for container replication & deletion command issued by
+   * ReplicationManager.
+   */
+  private final long eventTimeout;
+
+  /**
+   * Flag used to check if ReplicationMonitor thread is running or not.
+   */
+  private volatile boolean running;
+
+  /**
+   * Constructs ReplicationManager instance with the given configuration.
+   *
+   * @param conf OzoneConfiguration
+   * @param containerManager ContainerManager
+   * @param containerPlacement ContainerPlacementPolicy
+   * @param eventPublisher EventPublisher
+   */
+  public ReplicationManager(final Configuration conf,
+final ContainerManager containerManager,
+

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267072551
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267072403
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267072287
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267072228
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267071885
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
 ##
 @@ -131,6 +131,16 @@ public ContainerStateManager getContainerStateManager() {
 return containerStateManager;
   }
 
+  @Override
 
 Review comment:
   Fixed SCMContainerManager#getContainerIDs. Will create a new jira to fix 
SCMContainerManager#getContainers.





[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267072062
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r267071998
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##

[GitHub] [hadoop] sahilTakiar commented on a change in pull request #600: HDFS-14348: Fix JNI exception handling issues in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on a change in pull request #600: HDFS-14348: Fix JNI 
exception handling issues in libhdfs
URL: https://github.com/apache/hadoop/pull/600#discussion_r267065815
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
 ##
 @@ -2498,6 +2498,11 @@ int hadoopRzOptionsSetByteBufferPool(
 return -1;
 }
 
+if (opts->byteBufferPool) {
+// Delete any previous ByteBufferPool we had.
+(*env)->DeleteGlobalRef(env, opts->byteBufferPool);
 
 Review comment:
   Done. Fixed a few other issues I found with this method while I was at it.





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #600: HDFS-14348: Fix JNI exception handling issues in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on a change in pull request #600: HDFS-14348: Fix JNI 
exception handling issues in libhdfs
URL: https://github.com/apache/hadoop/pull/600#discussion_r267065622
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
 ##
 @@ -2896,7 +2903,7 @@ hdfsGetHosts(hdfsFS fs, const char *path, tOffset start, 
tOffset length)
 for (i = 0; i < jNumFileBlocks; ++i) {
 jFileBlock =
 (*env)->GetObjectArrayElement(env, jBlockLocations, i);
-if (!jFileBlock) {
+if ((*env)->ExceptionOccurred || !jFileBlock) {
 
 Review comment:
   Yeah, it definitely should be. Fixed.
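
   A minimal sketch of the corrected check, on the assumption that the point
   above is that ExceptionOccurred must actually be invoked rather than tested
   as a bare function pointer; the helper name is hypothetical:

    #include <jni.h>

    /* Hypothetical helper: an element is usable only if no exception is
       pending and the element itself is non-NULL. ExceptionOccurred is a
       function pointer in the JNIEnv vtable, so it must be *called*; the
       bare expression (*env)->ExceptionOccurred is never NULL and would
       make the condition always true. */
    static int blockUsable(JNIEnv *env, jobject jFileBlock)
    {
        if ((*env)->ExceptionOccurred(env) || !jFileBlock) {
            return 0;
        }
        return 1;
    }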





[jira] [Commented] (HADOOP-16186) NPE in ITestS3AFileSystemContract teardown in DynamoDBMetadataStore.lambda$listChildren

2019-03-19 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796456#comment-16796456
 ] 

Gabor Bota commented on HADOOP-16186:
-

We get an NPE at {{DynamoDBMetadataStore.java:653}}, i.e. at 
{{dirPathMeta.getLastUpdated()}}, which means dirPathMeta is null.
 We reach that line, so the guard {{metas.isEmpty() && dirPathMeta == 
null}} evaluates to false; and since dirPathMeta == null is true, {{metas.isEmpty()}} must be false.

{{metas}} is the metadata for the files in that path, whereas {{dirPathMeta}} is 
the directory metadata fetched from ddb AFTER getting the listing of the 
directory from ddb.
h3. Understanding the issue

So let's see when {{metas.isEmpty()}} can evaluate to false (so there are items 
in the directory) while at the same time {{dirPathMeta}} is null (so there is no 
directory metadata for that path in dynamo):
 * Something can delete the directory metadata between those two calls, i.e. between 
getting the file metas for that path and getting the dir metadata. Race 
condition? Maybe: in the teardown the directory is deleted, so a file can still 
be there (say, in a corner case, one file is left when the file metas are 
queried), but by the time we ask for the directory meta it has been deleted, so 
we get null.
 * There's no directory marker in ddb, but the files are there. That would be 
an implementation issue in how we create directory markers.

h3. Proposed solution

Log an error if this happens. Handle it by checking the file metas and the 
dir meta separately.

> NPE in ITestS3AFileSystemContract teardown in  
> DynamoDBMetadataStore.lambda$listChildren
> 
>
> Key: HADOOP-16186
> URL: https://issues.apache.org/jira/browse/HADOOP-16186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> Test run options. NPE in test teardown
> {code}
> -Dparallel-tests -DtestsThreadCount=6 -Ds3guard -Ddynamodb
> {code}
> If you look at the code, it's *exactly* the place fixed in HADOOP-15827, a 
> change which HADOOP-15947 reverted. 
> There's clearly some codepath which can surface and cause failures in 
> some situations, and having multiple patches switching between the && and || 
> operators isn't going to fix it






[jira] [Updated] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-19 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-16156:

Attachment: HADOOP-16156.003.patch

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch, 
> HADOOP-16156.003.patch
>
>







[GitHub] [hadoop] toddlipcon commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
toddlipcon commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267053799
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -534,6 +545,21 @@ JNIEnv* getJNIEnv(void)
 
 state->env = getGlobalJNIEnv();
 mutexUnlock();
+
+if (!jclassesInitialized) {
+  mutexLock();
 
 Review comment:
   I don't see a corresponding unlock for this





[GitHub] [hadoop] toddlipcon commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
toddlipcon commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267053752
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -534,6 +545,21 @@ JNIEnv* getJNIEnv(void)
 
 state->env = getGlobalJNIEnv();
 mutexUnlock();
+
+if (!jclassesInitialized) {
+  mutexLock();
+  if (!jclassesInitialized) {
 
 Review comment:
   that said, this isn't a hot path so I don't think it's worth trying to be 
fancy, and I think we can just acquire the mutex unconditionally
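
   A minimal sketch of that simpler shape, using plain pthread primitives for
   illustration (libhdfs has its own mutex helpers) and a hypothetical worker:
   take the lock unconditionally, check the flag once under it, and unlock on
   every path:

    #include <pthread.h>

    static pthread_mutex_t jclassInitMutex = PTHREAD_MUTEX_INITIALIZER;
    static int jclassesInitialized;             /* guarded by jclassInitMutex */

    extern int doInitCachedClasses(void *env);  /* hypothetical worker, 0 on success */

    int initCachedClassesOnce(void *env)        /* hypothetical wrapper */
    {
        int ret = 0;
        pthread_mutex_lock(&jclassInitMutex);   /* unconditional: this is a cold path */
        if (!jclassesInitialized) {
            ret = doInitCachedClasses(env);
            if (ret == 0) {
                jclassesInitialized = 1;
            }
        }
        pthread_mutex_unlock(&jclassInitMutex); /* paired unlock on every path */
        return ret;
    }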





[GitHub] [hadoop] toddlipcon commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
toddlipcon commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267054193
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -534,6 +545,21 @@ JNIEnv* getJNIEnv(void)
 
 state->env = getGlobalJNIEnv();
 mutexUnlock();
+
+if (!jclassesInitialized) {
+  mutexLock();
+  if (!jclassesInitialized) {
+jthrowable jthr = NULL;
+jthr = initCachedClasses(state->env);
+if (jthr) {
+  printExceptionAndFree(state->env, jthr, PRINT_EXC_ALL,
+"initCachedClasses failed");
+  return NULL;
 
 Review comment:
   does this return (and the one above) miss some cleanup that should happen in 
the 'goto fail' label? (ugh I hate programming in C...)
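
   For reference, a generic sketch of the single-exit idiom the comment
   alludes to (names are hypothetical, not the actual jni_helper.c code):
   every failure path jumps to one cleanup label, so an early return cannot
   skip the cleanup:

    #include <stdlib.h>

    struct ThreadLocalState { void *env; };     /* illustrative */

    extern void *acquireEnv(void);              /* hypothetical helpers */
    extern int initCaches(void *env);

    struct ThreadLocalState *allocState(void)
    {
        struct ThreadLocalState *state = calloc(1, sizeof(*state));
        if (!state)
            goto fail;
        state->env = acquireEnv();
        if (!state->env)
            goto fail;
        if (initCaches(state->env) != 0)
            goto fail;
        return state;

    fail:                                       /* one place for all cleanup */
        free(state);                            /* free(NULL) is a no-op */
        return NULL;
    }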





[GitHub] [hadoop] toddlipcon commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
toddlipcon commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267053653
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -534,6 +545,21 @@ JNIEnv* getJNIEnv(void)
 
 state->env = getGlobalJNIEnv();
 mutexUnlock();
+
+if (!jclassesInitialized) {
+  mutexLock();
+  if (!jclassesInitialized) {
 
 Review comment:
   this double-checked locking idiom isn't safe unless you insert a memory 
barrier before the store to jclassesInitialized. (it has to be a store with 
"release" semantics)
   
   Otherwise, the compiler (and/or CPU) is free to decide to reorder the store 
to 'jclassesInitialized = 1' up earlier within the critical section.
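
   A sketch of double-checked initialization done with C11 atomics
   (illustrative only; the patch may use a different mechanism): the flag is
   published with a release store and read with an acquire load, which is the
   barrier described above:

    #include <pthread.h>
    #include <stdatomic.h>

    static pthread_mutex_t initMutex = PTHREAD_MUTEX_INITIALIZER;
    static atomic_int classesInitialized;       /* stand-in for jclassesInitialized */

    extern void doInit(void);                   /* hypothetical initialization work */

    void ensureInitialized(void)
    {
        /* Fast path: this acquire load pairs with the release store below, so
           a thread that observes 1 also observes everything doInit() wrote. */
        if (atomic_load_explicit(&classesInitialized, memory_order_acquire))
            return;
        pthread_mutex_lock(&initMutex);
        if (!atomic_load_explicit(&classesInitialized, memory_order_relaxed)) {
            doInit();
            /* Release store: neither compiler nor CPU may reorder it before
               doInit()'s writes -- the barrier the comment above asks for. */
            atomic_store_explicit(&classesInitialized, 1, memory_order_release);
        }
        pthread_mutex_unlock(&initMutex);
    }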





[GitHub] [hadoop] toddlipcon commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
toddlipcon commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267052771
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.h
 ##
 @@ -0,0 +1,117 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBHDFS_JCLASSES_H
+#define LIBHDFS_JCLASSES_H
+
+#include 
+
+/**
+ * Encapsulates logic to cache jclass objects so they can be re-used across
+ * calls to FindClass. Creating jclass objects every time libhdfs has to
+ * invoke a method can hurt performance. By caching jclass objects we avoid
+ * this overhead.
+ *
+ * We use the term "cached" here loosely; jclasses are not truly cached,
+ * instead they are created once during JVM load and are kept alive until the
+ * process shuts down. There is no eviction of jclass objects.
+ *
+ * @see https://www.ibm.com/developerworks/library/j-jni/index.html#notc
+ */
+
+/**
+ * Each enum value represents one jclass that is cached. Enum values should
+ * be passed to getJclass or getName to get the jclass object or class name
+ * represented by the enum value.
+ */
+typedef enum {
+JC_CONFIGURATION,
+JC_PATH,
+JC_FILE_SYSTEM,
+JC_FS_STATUS,
+JC_FILE_UTIL,
+JC_BLOCK_LOCATION,
+JC_DFS_HEDGED_READ_METRICS,
+JC_DISTRIBUTED_FILE_SYSTEM,
+JC_FS_DATA_INPUT_STREAM,
+JC_FS_DATA_OUTPUT_STREAM,
+JC_FILE_STATUS,
+JC_FS_PERMISSION,
+JC_READ_STATISTICS,
+JC_HDFS_DATA_INPUT_STREAM,
+JC_DOMAIN_SOCKET,
+JC_URI,
+JC_BYTE_BUFFER,
+JC_ENUM_SET,
+JC_EXCEPTION_UTILS,
+// A special marker enum that counts the number of cached jclasses
+NUM_CACHED_CLASSES
+} CachedJavaClass;
+
+/**
+ * Whether initCachedClasses has been called or not. Protected by the mutex
+ * jclassInitMutex.
+ */
+extern int jclassesInitialized;
 
 Review comment:
   Instead of exposing this externally from jclasses.h, I think it'd be better 
to call initCachedClasses() unconditionally and let it decide internally 
whether to no-op. That's a cold code path (hit only once per thread), so 
avoiding the non-inlined function call isn't important.
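   
   Sketched with placeholder names (this is not the actual patch), that shape 
keeps the flag private to jclasses.c and makes the call safe to issue 
unconditionally:
   
   ```c
   #include <jni.h>
   #include <pthread.h>
   
   static pthread_mutex_t jclassInitMutex = PTHREAD_MUTEX_INITIALIZER;
   static int jclassesInitialized = 0;  /* file-local, no longer extern */
   
   jthrowable initCachedClasses(JNIEnv *env) {
       jthrowable jthr = NULL;
       pthread_mutex_lock(&jclassInitMutex);
       if (!jclassesInitialized) {
           /* ... FindClass + NewGlobalRef for each CachedJavaClass,
              using env; elided in this sketch ... */
           jclassesInitialized = 1;
       }
       pthread_mutex_unlock(&jclassInitMutex);
       return jthr;  /* NULL on success or when already initialized */
   }
   ```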


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] toddlipcon commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
toddlipcon commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267055577
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -16,19 +16,17 @@
  * limitations under the License.
  */
 
+#include "jclasses.h"
 
 Review comment:
   nit: can you sort these includes?
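   
   For instance, keeping the quoted includes alphabetical (the file names here 
are illustrative, not necessarily the exact set in jni_helper.c):
   
   ```c
   #include "exception.h"
   #include "jclasses.h"
   #include "jni_helper.h"
   #include "platform.h"
   ```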


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] toddlipcon commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
toddlipcon commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267052910
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.h
 ##
 @@ -0,0 +1,117 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef LIBHDFS_JCLASSES_H
+#define LIBHDFS_JCLASSES_H
+
+#include <jni.h>
+
+/**
+ * Encapsulates logic to cache jclass objects so they can be re-used across
+ * calls to FindClass. Creating jclass objects every time libhdfs has to
+ * invoke a method can hurt performance. By caching jclass objects we avoid
+ * this overhead.
+ *
+ * We use the term "cached" here loosely; jclasses are not truly cached,
+ * instead they are created once during JVM load and are kept alive until the
+ * process shuts down. There is no eviction of jclass objects.
+ *
+ * @see https://www.ibm.com/developerworks/library/j-jni/index.html#notc
+ */
+
+/**
+ * Each enum value represents one jclass that is cached. Enum values should
+ * be passed to getJclass or getName to get the jclass object or class name
+ * represented by the enum value.
+ */
+typedef enum {
+JC_CONFIGURATION,
+JC_PATH,
+JC_FILE_SYSTEM,
+JC_FS_STATUS,
+JC_FILE_UTIL,
+JC_BLOCK_LOCATION,
+JC_DFS_HEDGED_READ_METRICS,
+JC_DISTRIBUTED_FILE_SYSTEM,
+JC_FS_DATA_INPUT_STREAM,
+JC_FS_DATA_OUTPUT_STREAM,
+JC_FILE_STATUS,
+JC_FS_PERMISSION,
+JC_READ_STATISTICS,
+JC_HDFS_DATA_INPUT_STREAM,
+JC_DOMAIN_SOCKET,
+JC_URI,
+JC_BYTE_BUFFER,
+JC_ENUM_SET,
+JC_EXCEPTION_UTILS,
+// A special marker enum that counts the number of cached jclasses
+NUM_CACHED_CLASSES
+} CachedJavaClass;
+
+/**
+ * Whether initCachedClasses has been called or not. Protected by the mutex
+ * jclassInitMutex.
+ */
+extern int jclassesInitialized;
+
+/**
+ * Internally initializes all jclass objects listed in the CachedJavaClass 
enum.
+ */
+jthrowable initCachedClasses(JNIEnv* env);
 
 Review comment:
   per above, I think we should document that this is idempotent and 
thread-safe (safe to call multiple times).
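   
   One possible wording for the header (a sketch, not committed text; the 
NULL-on-success convention is assumed from the surrounding libhdfs code):
   
   ```c
   /**
    * Initializes all jclass objects listed in the CachedJavaClass enum.
    * Idempotent and thread-safe: may be called multiple times and from
    * multiple threads; only the first call does any work.
    * Returns NULL on success, or a jthrowable describing the failure.
    */
   jthrowable initCachedClasses(JNIEnv* env);
   ```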


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-19 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796438#comment-16796438
 ] 

Shweta edited comment on HADOOP-16156 at 3/19/19 7:08 PM:
--

Thanks for the review [~templedf]. 

I looked again at how the for loop works; deleting that line was incorrect on 
my part. 
Uploaded patch v003.
Looks like the space before the closing brace on the line {code} if (index 
!= -1) { {code} was already present.

Added a new line before the closing brace for {code} return children; } {code}


was (Author: shwetayakkali):
Thanks for the review [~templedf]. 

I looked again at how the for loop works; deleting that line was incorrect on 
my part. 

Looks like the space before the closing brace on the line {code} if (index 
!= -1) { {code} was already present.

Added a new line before the closing brace for {code} return children; } {code}

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch, 
> HADOOP-16156.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-19 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796438#comment-16796438
 ] 

Shweta commented on HADOOP-16156:
-

Thanks for the review [~templedf]. 

I looked again at how the for loop works; deleting that line was incorrect on 
my part. 

Looks like the space before the closing brace on the line {code} if (index 
!= -1) { {code} was already present.

Added a new line before the closing brace for {code} return children; } {code}

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #622: HDDS-1307. Test ScmChillMode testChillModeOperations failed.

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #622: HDDS-1307. Test ScmChillMode 
testChillModeOperations failed.
URL: https://github.com/apache/hadoop/pull/622#issuecomment-474519881
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1109 | trunk passed |
   | +1 | compile | 29 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 31 | trunk passed |
   | +1 | shadedclient | 754 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | trunk passed |
   | +1 | javadoc | 17 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 37 | the patch passed |
   | +1 | compile | 27 | the patch passed |
   | +1 | javac | 27 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 27 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 792 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | the patch passed |
   | +1 | javadoc | 15 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 882 | integration-test in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3908 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.om.TestScmChillMode |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestOzoneManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-622/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/622 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 52950ed4b5f8 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 992489c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-622/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-622/2/testReport/ |
   | Max. process+thread count | 4432 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-622/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #577: HADOOP-16058 S3A to support terasort

2019-03-19 Thread GitBox
steveloughran commented on issue #577: HADOOP-16058 S3A to support terasort
URL: https://github.com/apache/hadoop/pull/577#issuecomment-474513274
 
 
   on trunk, the yarn smart-apply-patch works for me, provided I don't ask it 
to actually commit the PR
   
   ```
   dev-support/bin/smart-apply-patch --project=hadoop GH:577
   ```
   
   This will pull down the patch and apply it. Doesn't seem to work for older 
branches though...you'd probably need to update it everywhere to the latest 
Yetus code.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #577: HADOOP-16058 S3A to support terasort

2019-03-19 Thread GitBox
steveloughran commented on issue #577: HADOOP-16058 S3A to support terasort
URL: https://github.com/apache/hadoop/pull/577#issuecomment-474512521
 
 
   @ajfabbri 
   
   
   * you know, for any PR, append .patch to the URL and you get the patch
   * if you hit "view command line instructions" you get the shell commands to 
check out the branch. Probably the best strategy for an evolving PR
   * and you can fetch the PR from the project by ID: `git fetch origin 
pull/577/head:BRANCHNAME` -- see the example below
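   
   For example, to pull this PR into a local branch (assuming `origin` points 
at apache/hadoop; the branch name is arbitrary):
   
   ```
   git fetch origin pull/577/head:pr-577
   git checkout pr-577
   ```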
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-474510679
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1129 | trunk passed |
   | +1 | compile | 95 | trunk passed |
   | +1 | mvnsite | 21 | trunk passed |
   | +1 | shadedclient | 1896 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 15 | the patch passed |
   | +1 | compile | 93 | the patch passed |
   | +1 | cc | 93 | the patch passed |
   | +1 | javac | 93 | the patch passed |
   | +1 | mvnsite | 15 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 725 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 339 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3272 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/595 |
   | JIRA Issue | HDFS-14304 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux ffbd0e20f536 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 992489c |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/2/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia commented on a change in pull request #603: HADOOP-16026:Replace incorrect use of system property user.name

2019-03-19 Thread GitBox
dineshchitlangia commented on a change in pull request #603: 
HADOOP-16026:Replace incorrect use of system property user.name
URL: https://github.com/apache/hadoop/pull/603#discussion_r267031594
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/SshFenceByTcpPort.java
 ##
 @@ -236,7 +237,13 @@ private int getSshConnectTimeout() {
 
 public Args(String arg) 
 throws BadFencingConfigurationException {
-  user = System.getProperty("user.name");
+  try {
+user = UserGroupInformation.getCurrentUser().getShortUserName();
 
 Review comment:
   @toddlipcon - Good catch. Now that you make this point, I, too, am not 
convinced that it needs to be changed here. Let me request @jojochuang for his 
thoughts.
   Thanks for reviewing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16026) Replace incorrect use of system property user.name

2019-03-19 Thread Dinesh Chitlangia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HADOOP-16026:
---
Attachment: (was: HADOOP-16026.01.patch)

> Replace incorrect use of system property user.name
> --
>
> Key: HADOOP-16026
> URL: https://issues.apache.org/jira/browse/HADOOP-16026
> Project: Hadoop Common
>  Issue Type: Improvement
> Environment: Kerberized
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>
> This jira has been created to track the suggested changes for Hadoop Common 
> as identified in HDFS-14176
> Following occurrence need to be corrected:
>  Common/FileSystem L2233
>  Common/AbstractFileSystem L451
>  Common/SshFenceByTcpPort L239



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sahilTakiar commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-19 Thread GitBox
sahilTakiar commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-474486689
 
 
   @toddlipcon addressed your comments.
   
   Perhaps I should not be force-pushing my changes, because your comments seem 
to have disappeared from the PR landing page, but you can still see them (and 
my responses) here: 
https://github.com/apache/hadoop/commit/05a31caf25a1ce844f1a0d7b585b2e0dc1b2412f


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796276#comment-16796276
 ] 

Hadoop QA commented on HADOOP-16152:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
8s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 37s{color} 
| {color:red} root generated 2 new + 1482 unchanged - 0 fixed = 1484 total (was 
1482) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
53s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
59s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16152 |
| JIRA Patch URL | 

[GitHub] [hadoop] xiaoyuyao merged pull request #615: HDDS-1215. Change hadoop-runner and apache/hadoop base image to use J…

2019-03-19 Thread GitBox
xiaoyuyao merged pull request #615: HDDS-1215. Change hadoop-runner and 
apache/hadoop base image to use J…
URL: https://github.com/apache/hadoop/pull/615
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-19 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-16172:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update apache/hadoop:3 to 3.2.0 release
> ---
>
> Key: HADOOP-16172
> URL: https://issues.apache.org/jira/browse/HADOOP-16172
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-16172-docker-hadoop-3.01.patch
>
>
> This ticket is opened to update apache/hadoop:3 from 3.1.1 to 3.2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16172) Update apache/hadoop:3 to 3.2.0 release

2019-03-19 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796267#comment-16796267
 ] 

Elek, Marton commented on HADOOP-16172:
---

+1, thanks for updating this.

Will commit to the docker-hadoop-3 branch soon.

> Update apache/hadoop:3 to 3.2.0 release
> ---
>
> Key: HADOOP-16172
> URL: https://issues.apache.org/jira/browse/HADOOP-16172
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HADOOP-16172-docker-hadoop-3.01.patch
>
>
> This ticket is opened to update apache/hadoop:3 from 3.1.1 to 3.2.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16196) Path Parameterize Comparable

2019-03-19 Thread David Mollitor (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796231#comment-16796231
 ] 

David Mollitor commented on HADOOP-16196:
-

[~ste...@apache.org] Thanks Steve.  Patch updated.  Please consider for 
inclusion into the project.

> Path Parameterize Comparable
> 
>
> Key: HADOOP-16196
> URL: https://issues.apache.org/jira/browse/HADOOP-16196
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16196.1.patch, HADOOP-16196.2.patch, 
> HADOOP-16196.3.patch
>
>
> The {{Path}} class implements {{Comparable}} which is now a parameterized 
> class.
> https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html
> Make {{Path}} parameterized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #624: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #624: HADOOP-15999. S3Guard: Better support for 
out-of-band operations
URL: https://github.com/apache/hadoop/pull/624#issuecomment-474423739
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 983 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 711 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 45 | trunk passed |
   | +1 | javadoc | 25 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 30 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | +1 | checkstyle | 19 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | the patch passed |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 279 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 23 | The patch does not generate ASF License warnings. |
   | | | 3162 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-624/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/624 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux f4304e1c482e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / c9e50c4 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-624/1/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-624/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16196) Path Parameterize Comparable

2019-03-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796169#comment-16796169
 ] 

Hadoop QA commented on HADOOP-16196:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
46s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
8s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16196 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12962978/HADOOP-16196.3.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ba7f68020a65 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c9e50c4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test 

[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-03-19 Thread Yuming Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16152:
-
Attachment: HADOOP-16152.v1.patch
Status: Patch Available  (was: Open)

Update Jetty to 9.4.x according to 
https://www.eclipse.org/jetty/documentation/9.4.x/upgrading-jetty.html

> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16152.v1.patch
>
>
> Some big data projects have been upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16186) NPE in ITestS3AFileSystemContract teardown in DynamoDBMetadataStore.lambda$listChildren

2019-03-19 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16186 started by Gabor Bota.
---
> NPE in ITestS3AFileSystemContract teardown in  
> DynamoDBMetadataStore.lambda$listChildren
> 
>
> Key: HADOOP-16186
> URL: https://issues.apache.org/jira/browse/HADOOP-16186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> Test run options. NPE in test teardown
> {code}
> -Dparallel-tests -DtestsThreadCount=6 -Ds3guard -Ddynamodb
> {code}
> If you look at the code, it's *exactly* the place fixed in HADOOP-15827, a 
> change which HADOOP-15947 reverted. 
> There's clearly some codepath that can surface and cause failures in 
> some situations, and having multiple patches switching between the && and || 
> operators isn't going to fix it



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15999) S3Guard: Better support for out-of-band operations

2019-03-19 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796148#comment-16796148
 ] 

Gabor Bota commented on HADOOP-15999:
-

Updated the pull request with the directory skipping: 
[https://github.com/apache/hadoop/pull/624]

Successful itests run against Ireland.

> S3Guard: Better support for out-of-band operations
> --
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15999-007.patch, HADOOP-15999.001.patch, 
> HADOOP-15999.002.patch, HADOOP-15999.003.patch, HADOOP-15999.004.patch, 
> HADOOP-15999.005.patch, HADOOP-15999.006.patch, HADOOP-15999.008.patch, 
> HADOOP-15999.009.patch, out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg opened a new pull request #624: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-19 Thread GitBox
bgaborg opened a new pull request #624: HADOOP-15999. S3Guard: Better support 
for out-of-band operations
URL: https://github.com/apache/hadoop/pull/624
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg closed pull request #607: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-19 Thread GitBox
bgaborg closed pull request #607: HADOOP-15999. S3Guard: Better support for 
out-of-band operations
URL: https://github.com/apache/hadoop/pull/607
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16196) Path Parameterize Comparable

2019-03-19 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16196:

Status: Open  (was: Patch Available)

> Path Parameterize Comparable
> 
>
> Key: HADOOP-16196
> URL: https://issues.apache.org/jira/browse/HADOOP-16196
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16196.1.patch, HADOOP-16196.2.patch, 
> HADOOP-16196.3.patch
>
>
> The {{Path}} class implements {{Comparable}} which is now a parameterized 
> class.
> https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html
> Make {{Path}} parameterized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16196) Path Parameterize Comparable

2019-03-19 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16196:

Status: Patch Available  (was: Open)

> Path Parameterize Comparable
> 
>
> Key: HADOOP-16196
> URL: https://issues.apache.org/jira/browse/HADOOP-16196
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16196.1.patch, HADOOP-16196.2.patch, 
> HADOOP-16196.3.patch
>
>
> The {{Path}} class implements {{Comparable}} which is now a parameterized 
> class.
> https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html
> Make {{Path}} parameterized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16196) Path Parameterize Comparable

2019-03-19 Thread David Mollitor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16196:

Attachment: HADOOP-16196.3.patch

> Path Parameterize Comparable
> 
>
> Key: HADOOP-16196
> URL: https://issues.apache.org/jira/browse/HADOOP-16196
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16196.1.patch, HADOOP-16196.2.patch, 
> HADOOP-16196.3.patch
>
>
> The {{Path}} class implements {{Comparable}} which is now a parameterized 
> class.
> https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html
> Make {{Path}} parameterized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16090) S3A Client to add explicit support for versioned stores

2019-03-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795967#comment-16795967
 ] 

Steve Loughran commented on HADOOP-16090:
-

BTW, in future, with a move to async operations, I could imagine the write 
context doing the scan for parent dirs asynchronously with the file write. 
Assuming the write takes less than a few seconds, and 200ms per probe, if you 
have enough httpclient connections there'd be no delay at the end of the 
write at all. 

There's some risk of multiple writes to the same dir finding entries which will 
be deleted anyway, so spurious tombstones get created. Unless, before that 
final bulk delete, we did a getObjectMetadata call on the first marker believed 
to exist and, if found, assumed there'd already been a bulk delete.

> S3A Client to add explicit support for versioned stores
> ---
>
> Key: HADOOP-16090
> URL: https://issues.apache.org/jira/browse/HADOOP-16090
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Dmitri Chmelev
>Assignee: Steve Loughran
>Priority: Minor
>
> The fix to avoid calls to getFileStatus() for each path component in 
> deleteUnnecessaryFakeDirectories() (HADOOP-13164) results in accumulation of 
> delete markers in versioned S3 buckets. The above patch replaced 
> getFileStatus() checks with a single batch delete request formed by 
> generating all ancestor keys formed from a given path. Since the delete 
> request is not checking for existence of fake directories, it will create a 
> delete marker for every path component that did not exist (or was previously 
> deleted). Note that issuing a DELETE request without specifying a version ID 
> will always create a new delete marker, even if one already exists ([AWS S3 
> Developer 
> Guide|https://docs.aws.amazon.com/AmazonS3/latest/dev/RemDelMarker.html])
> Since deleteUnnecessaryFakeDirectories() is called as a callback on 
> successful writes and on renames, delete markers accumulate rather quickly 
> and their rate of accumulation is inversely proportional to the depth of the 
> path. In other words, directories closer to the root will have more delete 
> markers than the leaves.
> This behavior negatively impacts performance of getFileStatus() operation 
> when it has to issue listObjects() request (especially v1) as the delete 
> markers have to be examined when the request searches for first current 
> non-deleted version of an object following a given prefix.
> I did a quick comparison against 3.x and the issue is still present: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2947|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2947]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16090) S3A Client to add explicit support for versioned stores

2019-03-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795964#comment-16795964
 ] 

Steve Loughran commented on HADOOP-16090:
-

I'm dependent on other people's reviews, so a couple of days was going to be 
the bare minimum anyway

* love to see what you've been doing
* assume that I'll be backporting the invoker stuff to branch-2 (but not the 
current use throughout branch-3.1+)
* refactoring getFileStatus. Hmm. Popular entry point. Slow enough over the 
long haul that you can see the logs pausing.

I'm actually sketching out (but yet to publish) something about incremental 
refactoring of the S3A code itself: the S3AFileSystem class has got over 
complex.

Something like

* {{org.apache.hadoop.fs.s3a.impl.S3Core}}:  S3-api level, with state coming in 
as context, link to executor pool
* {{org.apache.hadoop.fs.s3a.impl.S3AModel}}: the filesystem model atop the 
core: S3Guard, directory trees etc.
* {{S3AFileSystem}}: Hadoop API

Core doesn't call model; model doesn't call FS. 

At the same time, no grand "break all backporting" patch. 

Anyhow, assume getObjectMetadata is the lowest level, but in the S3AModel it'd 
be going through S3Guard to, in future, pick up version info from there as/when 
collected. If you've not been keeping an eye on things: input streams are now 
version-aware too.



> S3A Client to add explicit support for versioned stores
> ---
>
> Key: HADOOP-16090
> URL: https://issues.apache.org/jira/browse/HADOOP-16090
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.1
>Reporter: Dmitri Chmelev
>Assignee: Steve Loughran
>Priority: Minor
>
> The fix to avoid calls to getFileStatus() for each path component in 
> deleteUnnecessaryFakeDirectories() (HADOOP-13164) results in accumulation of 
> delete markers in versioned S3 buckets. The above patch replaced 
> getFileStatus() checks with a single batch delete request formed by 
> generating all ancestor keys formed from a given path. Since the delete 
> request is not checking for existence of fake directories, it will create a 
> delete marker for every path component that did not exist (or was previously 
> deleted). Note that issuing a DELETE request without specifying a version ID 
> will always create a new delete marker, even if one already exists ([AWS S3 
> Developer 
> Guide|https://docs.aws.amazon.com/AmazonS3/latest/dev/RemDelMarker.html])
> Since deleteUnnecessaryFakeDirectories() is called as a callback on 
> successful writes and on renames, delete markers accumulate rather quickly 
> and their rate of accumulation is inversely proportional to the depth of the 
> path. In other words, directories closer to the root will have more delete 
> markers than the leaves.
> This behavior negatively impacts performance of getFileStatus() operation 
> when it has to issue listObjects() request (especially v1) as the delete 
> markers have to be examined when the request searches for first current 
> non-deleted version of an object following a given prefix.
> I did a quick comparison against 3.x and the issue is still present: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2947|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2947]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16196) Path Parameterize Comparable

2019-03-19 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795954#comment-16795954
 ] 

Steve Loughran commented on HADOOP-16196:
-

I would, just to keep that diff down and make it easier to cherry-pick

> Path Parameterize Comparable
> 
>
> Key: HADOOP-16196
> URL: https://issues.apache.org/jira/browse/HADOOP-16196
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HADOOP-16196.1.patch, HADOOP-16196.2.patch
>
>
> The {{Path}} class implements {{Comparable}}, which is now a parameterized 
> interface.
> https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html
> Make {{Path}} parameterized.
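
For context, the change under discussion amounts to something like the sketch
below (illustrative only, not the actual patch; the real class is
org.apache.hadoop.fs.Path and compares full URIs):

/** Sketch: raw Comparable replaced by Comparable<Path>. */
public class Path implements Comparable<Path> {
  private final String uri;
  public Path(String uri) { this.uri = uri; }
  @Override
  public int compareTo(Path other) {  // no cast from Object needed any more
    return uri.compareTo(other.uri);
  }
}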



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16198) Upgrade Jackson-databind version to 2.9.8

2019-03-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16198:

Component/s: build

> Upgrade Jackson-databind version to 2.9.8
> -
>
> Key: HADOOP-16198
> URL: https://issues.apache.org/jira/browse/HADOOP-16198
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Jackson-databind 2.9.8 has a few fixes which are important to include.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16198) Upgrade Jackson-databind version to 2.9.8

2019-03-19 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16198:

Affects Version/s: 3.2.0

> Upgrade Jackson-databind version to 2.9.8
> -
>
> Key: HADOOP-16198
> URL: https://issues.apache.org/jira/browse/HADOOP-16198
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> Jackson-databind 2.9.8 has a few fixes which are important to include.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r266797761
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is the one which is responsible for making sure
+ * that the containers are properly replicated. Replication Manager deals only
+ * with Quasi Closed / Closed container.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
+
+  /**
+   * This is used to track container replication commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightReplication;
+
+  /**
+   * This is used to track container deletion commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightDeletion;
+
+  /**
+   * ReplicationMonitor thread is the one which wakes up at configured
+   * interval and processes all the containers.
+   */
+  private final Thread replicationMonitor;
+
+  /**
+   * The frequency in which ReplicationMonitor thread should run.
+   */
+  private final long interval;
+
+  /**
+   * Timeout for container replication & deletion command issued by
+   * ReplicationManager.
+   */
+  private final long eventTimeout;
+
+  /**
+   * Flag used to check if ReplicationMonitor thread is running or not.
+   */
+  private volatile boolean running;
+
+  /**
+   * Constructs ReplicationManager instance with the given configuration.
+   *
+   * @param conf OzoneConfiguration
+   * @param containerManager ContainerManager
+   * @param containerPlacement ContainerPlacementPolicy
+   * @param eventPublisher EventPublisher
+   */
+  public ReplicationManager(final Configuration conf,
+final ContainerManager containerManager,
+

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r266749114
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is the one which is responsible for making sure
+ * that the containers are properly replicated. Replication Manager deals only
+ * with Quasi Closed / Closed container.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
+
+  /**
+   * This is used to track container replication commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightReplication;
+
+  /**
+   * This is used to track container deletion commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightDeletion;
+
+  /**
+   * ReplicationMonitor thread is the one which wakes up at configured
+   * interval and processes all the containers.
+   */
+  private final Thread replicationMonitor;
+
+  /**
+   * The frequency in which ReplicationMonitor thread should run.
+   */
+  private final long interval;
+
+  /**
+   * Timeout for container replication & deletion command issued by
+   * ReplicationManager.
+   */
+  private final long eventTimeout;
+
+  /**
+   * Flag used to check if ReplicationMonitor thread is running or not.
+   */
+  private volatile boolean running;
+
+  /**
+   * Constructs ReplicationManager instance with the given configuration.
+   *
+   * @param conf OzoneConfiguration
+   * @param containerManager ContainerManager
+   * @param containerPlacement ContainerPlacementPolicy
+   * @param eventPublisher EventPublisher
+   */
+  public ReplicationManager(final Configuration conf,
+final ContainerManager containerManager,
+

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r266748832
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is the one which is responsible for making sure
+ * that the containers are properly replicated. Replication Manager deals only
+ * with Quasi Closed / Closed container.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
+
+  /**
+   * This is used to track container replication commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightReplication;
+
+  /**
+   * This is used to track container deletion commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightDeletion;
+
+  /**
+   * ReplicationMonitor thread is the one which wakes up at configured
+   * interval and processes all the containers.
+   */
+  private final Thread replicationMonitor;
+
+  /**
+   * The frequency in which ReplicationMonitor thread should run.
+   */
+  private final long interval;
+
+  /**
+   * Timeout for container replication & deletion command issued by
+   * ReplicationManager.
+   */
+  private final long eventTimeout;
+
+  /**
+   * Flag used to check if ReplicationMonitor thread is running or not.
+   */
+  private volatile boolean running;
+
+  /**
+   * Constructs ReplicationManager instance with the given configuration.
+   *
+   * @param conf OzoneConfiguration
+   * @param containerManager ContainerManager
+   * @param containerPlacement ContainerPlacementPolicy
+   * @param eventPublisher EventPublisher
+   */
+  public ReplicationManager(final Configuration conf,
+final ContainerManager containerManager,
+

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r266748568
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is the one which is responsible for making sure
+ * that the containers are properly replicated. Replication Manager deals only
+ * with Quasi Closed / Closed container.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
+
+  /**
+   * This is used to track container replication commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightReplication;
+
+  /**
+   * This is used to track container deletion commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightDeletion;
+
+  /**
+   * ReplicationMonitor thread is the one which wakes up at configured
+   * interval and processes all the containers.
+   */
+  private final Thread replicationMonitor;
+
+  /**
+   * The frequency in which ReplicationMonitor thread should run.
+   */
+  private final long interval;
+
+  /**
+   * Timeout for container replication & deletion command issued by
+   * ReplicationManager.
+   */
+  private final long eventTimeout;
+
+  /**
+   * Flag used to check if ReplicationMonitor thread is running or not.
+   */
+  private volatile boolean running;
+
+  /**
+   * Constructs ReplicationManager instance with the given configuration.
+   *
+   * @param conf OzoneConfiguration
+   * @param containerManager ContainerManager
+   * @param containerPlacement ContainerPlacementPolicy
+   * @param eventPublisher EventPublisher
+   */
+  public ReplicationManager(final Configuration conf,
+final ContainerManager containerManager,
+

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r266747949
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is the one which is responsible for making sure
+ * that the containers are properly replicated. Replication Manager deals only
+ * with Quasi Closed / Closed container.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
+
+  /**
+   * This is used to track container replication commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightReplication;
+
+  /**
+   * This is used to track container deletion commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightDeletion;
+
+  /**
+   * ReplicationMonitor thread is the one which wakes up at configured
+   * interval and processes all the containers.
+   */
+  private final Thread replicationMonitor;
+
+  /**
+   * The frequency in which ReplicationMonitor thread should run.
+   */
+  private final long interval;
+
+  /**
+   * Timeout for container replication & deletion command issued by
+   * ReplicationManager.
+   */
+  private final long eventTimeout;
+
+  /**
+   * Flag used to check if ReplicationMonitor thread is running or not.
+   */
+  private volatile boolean running;
+
+  /**
+   * Constructs ReplicationManager instance with the given configuration.
+   *
+   * @param conf OzoneConfiguration
+   * @param containerManager ContainerManager
+   * @param containerPlacement ContainerPlacementPolicy
+   * @param eventPublisher EventPublisher
+   */
+  public ReplicationManager(final Configuration conf,
+final ContainerManager containerManager,
+

[GitHub] [hadoop] hadoop-yetus commented on issue #622: HDDS-1307. Test ScmChillMode testChillModeOperations failed.

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #622: HDDS-1307. Test ScmChillMode 
testChillModeOperations failed.
URL: https://github.com/apache/hadoop/pull/622#issuecomment-474216410
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 976 | trunk passed |
   | +1 | compile | 51 | trunk passed |
   | +1 | checkstyle | 16 | trunk passed |
   | +1 | mvnsite | 27 | trunk passed |
   | +1 | shadedclient | 655 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | trunk passed |
   | +1 | javadoc | 14 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 21 | the patch passed |
   | +1 | javac | 21 | the patch passed |
   | -0 | checkstyle | 15 | hadoop-ozone/integration-test: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 707 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 0 | the patch passed |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 1138 | integration-test in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3817 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.scm.TestXceiverClientManager |
   |   | hadoop.ozone.om.TestOmBlockVersioning |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.ozone.scm.pipeline.TestSCMPipelineMetrics |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.web.client.TestVolume |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   |   | hadoop.ozone.ozShell.TestS3Shell |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.ozShell.TestOzoneDatanodeShell |
   |   | hadoop.ozone.web.TestOzoneRestWithMiniCluster |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler
 |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.scm.TestContainerSmallFile |
   |   | hadoop.ozone.om.TestOzoneManagerConfiguration |
   |   | hadoop.ozone.scm.TestSCMMXBean |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.hdds.scm.pipeline.TestPipelineClose |
   |   | hadoop.ozone.web.TestOzoneVolumes |
   |   | hadoop.ozone.om.TestOzoneManagerRestInterface |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 

[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r266745168
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is the one which is responsible for making sure
+ * that the containers are properly replicated. Replication Manager deals only
+ * with Quasi Closed / Closed container.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
+
+  /**
+   * This is used to track container replication commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightReplication;
+
+  /**
+   * This is used to track container deletion commands which are issued
+   * by ReplicationManager and not yet complete.
+   */
+  private final Map<ContainerID, List<InflightAction>> inflightDeletion;
+
+  /**
+   * ReplicationMonitor thread is the one which wakes up at configured
+   * interval and processes all the containers.
+   */
+  private final Thread replicationMonitor;
+
+  /**
+   * The frequency in which ReplicationMonitor thread should run.
+   */
+  private final long interval;
+
+  /**
+   * Timeout for container replication & deletion command issued by
+   * ReplicationManager.
+   */
+  private final long eventTimeout;
+
+  /**
+   * Flag used to check if ReplicationMonitor thread is running or not.
+   */
+  private volatile boolean running;
+
+  /**
+   * Constructs ReplicationManager instance with the given configuration.
+   *
+   * @param conf OzoneConfiguration
+   * @param containerManager ContainerManager
+   * @param containerPlacement ContainerPlacementPolicy
+   * @param eventPublisher EventPublisher
+   */
+  public ReplicationManager(final Configuration conf,
+final ContainerManager containerManager,
+

[GitHub] [hadoop] hadoop-yetus commented on issue #623: HDDS-1308. Fix asf license errors.

2019-03-19 Thread GitBox
hadoop-yetus commented on issue #623: HDDS-1308. Fix asf license errors.
URL: https://github.com/apache/hadoop/pull/623#issuecomment-474213941
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1173 | trunk passed |
   | -1 | compile | 51 | ozone-manager in trunk failed. |
   | +1 | checkstyle | 20 | trunk passed |
   | -1 | mvnsite | 24 | ozone-manager in trunk failed. |
   | +1 | shadedclient | 721 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 23 | ozone-manager in trunk failed. |
   | +1 | javadoc | 21 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 19 | ozone-manager in the patch failed. |
   | -1 | compile | 17 | ozone-manager in the patch failed. |
   | -1 | javac | 17 | ozone-manager in the patch failed. |
   | +1 | checkstyle | 13 | the patch passed |
   | -1 | mvnsite | 19 | ozone-manager in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 773 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 21 | ozone-manager in the patch failed. |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 21 | ozone-manager in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3104 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/623 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 6b1286accc19 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 568d3ab |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/branch-compile-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/branch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/patch-mvninstall-hadoop-ozone_ozone-manager.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/patch-compile-hadoop-ozone_ozone-manager.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/patch-compile-hadoop-ozone_ozone-manager.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/patch-mvnsite-hadoop-ozone_ozone-manager.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/patch-findbugs-hadoop-ozone_ozone-manager.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/artifact/out/patch-unit-hadoop-ozone_ozone-manager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-623/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-19 Thread GitBox
nandakumar131 commented on a change in pull request #620: HDDS-1205. Refactor 
ReplicationManager to handle QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#discussion_r266744423
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -0,0 +1,686 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdds.scm.container;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
+import org.apache.hadoop.hdds.protocol.proto
+.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto.State;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.placement.algorithms
+.ContainerPlacementPolicy;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.lock.LockManager;
+import org.apache.hadoop.ozone.protocol.commands.CloseContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.CommandForDatanode;
+import org.apache.hadoop.ozone.protocol.commands.DeleteContainerCommand;
+import org.apache.hadoop.ozone.protocol.commands.ReplicateContainerCommand;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.util.Preconditions;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Predicate;
+import java.util.stream.Collectors;
+
+/**
+ * Replication Manager (RM) is the one which is responsible for making sure
+ * that the containers are properly replicated. Replication Manager deals only
+ * with Quasi Closed / Closed container.
+ */
+public class ReplicationManager {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ReplicationManager.class);
+
+  /**
+   * Reference to the ContainerManager.
+   */
+  private final ContainerManager containerManager;
+
+  /**
+   * PlacementPolicy which is used to identify where a container
+   * should be copied.
+   */
+  private final ContainerPlacementPolicy containerPlacement;
+
+  /**
+   * EventPublisher to fire Replicate and Delete container commands.
+   */
+  private final EventPublisher eventPublisher;
+
+  /**
+   * Used for locking a container with its ID while processing it.
+   */
+  private final LockManager<ContainerID> lockManager;
 
 Review comment:
   Sure, we can do it.
   The reason for not having the lock embedded inside the container object is 
that the ReplicationManager never changes the state of the container. We get 
a snapshot of the container state from ContainerManager and take actions, but 
we never update or modify the container itself.
   
   Ideally, we don't even need a lock here if we change the logic of the 
`processContainersNow` method and just interrupt the thread. I was being a 
bit overcautious and introduced the lock :)
   
   Modifications done by report processors to the container state will be 
protected by the lock inside ContainerManager.
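   
   For reference, the fine-grained locking pattern being discussed looks 
roughly like the sketch below. The class is invented for illustration; only 
the org.apache.hadoop.ozone.lock.LockManager usage follows the real API.
   
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdds.scm.container.ContainerID;
import org.apache.hadoop.ozone.lock.LockManager;

/** Sketch of per-container-ID locking; not the actual patch. */
final class PerContainerLocking {
  private final LockManager<ContainerID> lockManager =
      new LockManager<>(new Configuration());

  void processContainer(ContainerID id) {
    lockManager.lock(id);            // fine-grained: one lock per container ID
    try {
      // act on a snapshot of the container state here; ReplicationManager
      // never modifies the container object itself
    } finally {
      lockManager.unlock(id);
    }
  }
}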


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-19 Thread GitBox
ajayydv commented on issue #601: HDDS-1119. DN get OM certificate from SCM CA 
for block token validat…
URL: https://github.com/apache/hadoop/pull/601#issuecomment-474210860
 
 
   @xiaoyuyao thanks for the continued reviews of this long patch. The UT 
failures look unrelated.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv merged pull request #601: HDDS-1119. DN get OM certificate from SCM CA for block token validat…

2019-03-19 Thread GitBox
ajayydv merged pull request #601: HDDS-1119. DN get OM certificate from SCM CA 
for block token validat…
URL: https://github.com/apache/hadoop/pull/601
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org