[GitHub] [hadoop] hadoop-yetus commented on issue #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #714: HDDS-1406. Avoid usage of commonPool in 
RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-494242612
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 51 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 89 | Maven dependency ordering for branch |
   | +1 | mvninstall | 702 | trunk passed |
   | +1 | compile | 351 | trunk passed |
   | +1 | checkstyle | 98 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1106 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 175 | trunk passed |
   | 0 | spotbugs | 294 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 489 | trunk passed |
   | -0 | patch | 349 | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 481 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 738 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 146 | the patch passed |
   | +1 | findbugs | 499 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 161 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1379 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 65 | The patch does not generate ASF License warnings. |
   | | | 7085 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline |
   |   | hadoop.hdds.scm.safemode.TestSCMSafeModeWithPipelineRules |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   |   | hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerHandler |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-714/13/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/714 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9b8dd3bcb2ce 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1cb2eb0 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-714/13/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-714/13/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-714/13/testReport/ |
   | Max. process+thread count | 5418 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-714/13/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #834: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #834: HDDS-1065. OM and DN should persist SCM 
certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/834#issuecomment-494228490
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 61 | Maven dependency ordering for branch |
   | +1 | mvninstall | 571 | trunk passed |
   | +1 | compile | 276 | trunk passed |
   | +1 | checkstyle | 73 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 846 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 136 | trunk passed |
   | 0 | spotbugs | 306 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 507 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 31 | Maven dependency ordering for patch |
   | +1 | mvninstall | 506 | the patch passed |
   | +1 | compile | 278 | the patch passed |
   | +1 | cc | 278 | the patch passed |
   | +1 | javac | 278 | the patch passed |
   | +1 | checkstyle | 77 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 655 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 145 | the patch passed |
   | +1 | findbugs | 531 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 149 | hadoop-hdds in the patch failed. |
   | -1 | unit | 926 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 6018 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-834/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/834 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux b21d89766c96 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1cb2eb0 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-834/4/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-834/4/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-834/4/testReport/ |
   | Max. process+thread count | 5404 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-834/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #834: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #834: HDDS-1065. OM and DN should persist SCM 
certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/834#issuecomment-494227265
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 533 | trunk passed |
   | +1 | compile | 267 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 817 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 141 | trunk passed |
   | 0 | spotbugs | 308 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 507 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 484 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | cc | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | +1 | checkstyle | 68 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 608 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 135 | the patch passed |
   | +1 | findbugs | 479 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 144 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1236 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6051 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-834/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/834 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 9c5dc63198a5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1cb2eb0 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-834/3/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-834/3/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-834/3/testReport/ |
   | Max. process+thread count | 4975 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-834/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on issue #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-20 Thread GitBox
bharatviswa504 commented on issue #714: HDDS-1406. Avoid usage of commonPool in 
RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-494224894
 
 
   @lokeshj1703 Addressed the review comments.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-20 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r285837951
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##########

 @@ -133,7 +173,86 @@ public Pipeline create(ReplicationFactor factor,
         .build();
   }
 
+
+  @Override
+  public void shutdown() {
+    forkJoinPool.shutdownNow();
+  }
+
   protected void initializePipeline(Pipeline pipeline) throws IOException {
-    RatisPipelineUtils.createPipeline(pipeline, conf);
+    createPipeline(pipeline);
+  }
+
+  /**
+   * Sends ratis command to create pipeline on all the datanodes.
+   *
+   * @param pipeline - Pipeline to be created
+   * @throws IOException if creation fails
+   */
+  public void createPipeline(Pipeline pipeline)
+      throws IOException {
+    final RaftGroup group = RatisHelper.newRaftGroup(pipeline);
+    LOG.debug("creating pipeline:{} with {}", pipeline.getId(), group);
+    callRatisRpc(pipeline.getNodes(),
+        (raftClient, peer) -> {
+          RaftClientReply reply = raftClient.groupAdd(group, peer.getId());
+          if (reply == null || !reply.isSuccess()) {
+            String msg = "Pipeline initialization failed for pipeline:"
+                + pipeline.getId() + " node:" + peer.getId();
+            LOG.error(msg);
+            throw new IOException(msg);
+          }
+        });
+  }
+
+  private void callRatisRpc(List<DatanodeDetails> datanodes,
+      CheckedBiConsumer<RaftClient, RaftPeer, IOException> rpc)
+      throws IOException {
+    if (datanodes.isEmpty()) {
+      return;
+    }
+
+    final String rpcType = conf
+        .get(ScmConfigKeys.DFS_CONTAINER_RATIS_RPC_TYPE_KEY,
+            ScmConfigKeys.DFS_CONTAINER_RATIS_RPC_TYPE_DEFAULT);
+    final RetryPolicy retryPolicy = RatisHelper.createRetryPolicy(conf);
+    final List<IOException> exceptions =
+        Collections.synchronizedList(new ArrayList<>());
+    final int maxOutstandingRequests =
+        HddsClientUtils.getMaxOutstandingRequests(conf);
+    final GrpcTlsConfig tlsConfig = RatisHelper.createTlsClientConfig(new
+        SecurityConfig(conf));
+    final TimeDuration requestTimeout =
+        RatisHelper.getClientRequestTimeout(conf);
+    try {
+      forkJoinPool.submit(() -> {
 
 Review comment:
   @lokeshj1703 Sorry, I missed this comment earlier.
   I checked this: one of the forkJoinPool threads is used for waiting, and that same thread is also used for one of the Ratis calls for the 3-node pipeline.
   
   **Output:**
   The line below is logged after the submit.
   Thread name RATISCREATEPIPELINE1
   `  forkJoinPool.submit(() -> {`
   The log lines below are from inside the parallelStream:
   `datanodes.parallelStream().forEach(d -> {`
   Internal thread name RATISCREATEPIPELINE1
   Internal thread name RATISCREATEPIPELINE3
   Internal thread name RATISCREATEPIPELINE2
   
   So I think we should be fine with parallelism set to 3. I even tried with 4, but I still see the same output as above.
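
The pattern discussed above can be sketched in a minimal standalone form (this is illustrative code, not the PR's actual implementation; the class name `CustomPoolDemo` and the sample datanode names are invented for the example). A parallel stream started from inside a task submitted to a dedicated `ForkJoinPool` runs its work on that pool's workers rather than on `ForkJoinPool.commonPool()` — which is the behavior the thread-name logging above verifies:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ForkJoinPool;

public class CustomPoolDemo {
    public static void main(String[] args) throws Exception {
        // Dedicated pool with parallelism 3, mirroring the PR's setting.
        ForkJoinPool pool = new ForkJoinPool(3);
        Set<String> threadNames = ConcurrentHashMap.newKeySet();
        List<String> datanodes = Arrays.asList("dn1", "dn2", "dn3");
        try {
            // The parallelStream below executes on the dedicated pool's
            // workers, not on ForkJoinPool.commonPool(), because it is
            // started from within a task submitted to that pool.
            pool.submit(() ->
                datanodes.parallelStream().forEach(dn ->
                    threadNames.add(Thread.currentThread().getName()))
            ).get();
        } finally {
            pool.shutdown();
        }
        // All recorded names belong to the dedicated pool's workers.
        threadNames.forEach(System.out::println);
    }
}
```

Note that a parallel stream using the enclosing ForkJoinPool is a long-standing JDK implementation detail rather than a documented guarantee, which is why verifying it empirically via thread names, as done above, is a reasonable check.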





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-20 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r285837310
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##########

 @@ -133,7 +173,86 @@ public Pipeline create(ReplicationFactor factor,
         .build();
   }
 
+
+  @Override
+  public void shutdown() {
+    forkJoinPool.shutdownNow();
 
 Review comment:
   Done





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-20 Thread GitBox
bharatviswa504 commented on a change in pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r285837285
 
 

 ##########
 File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##########

 @@ -24,35 +24,75 @@
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
 import org.apache.hadoop.hdds.scm.container.placement.algorithms.ContainerPlacementPolicy;
 import org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRandom;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline.PipelineState;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.ratis.RatisHelper;
+import org.apache.ratis.client.RaftClient;
+import org.apache.ratis.grpc.GrpcTlsConfig;
+import org.apache.ratis.protocol.RaftClientReply;
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.protocol.RaftPeer;
+import org.apache.ratis.retry.RetryPolicy;
+import org.apache.ratis.rpc.SupportedRpcType;
+import org.apache.ratis.util.TimeDuration;
+import org.apache.ratis.util.function.CheckedBiConsumer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
+import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.ForkJoinWorkerThread;
+import java.util.concurrent.RejectedExecutionException;
 import java.util.stream.Collectors;
 
 /**
  * Implements Api for creating ratis pipelines.
  */
 public class RatisPipelineProvider implements PipelineProvider {
 
+  private static final Logger LOG =
+      LoggerFactory.getLogger(RatisPipelineProvider.class);
+
   private final NodeManager nodeManager;
   private final PipelineStateManager stateManager;
   private final Configuration conf;
 
+  // Set parallelism at 3, as now in Ratis we create 1 and 3 node pipelines.
+  private final int parallelisimForPool = 3;
 
 Review comment:
   Done





[GitHub] [hadoop] hadoop-yetus commented on issue #837: Adding nodeId to Delimited File

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #837: Adding nodeId to Delimited File
URL: https://github.com/apache/hadoop/pull/837#issuecomment-494213730
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 54 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1120 | trunk passed |
   | +1 | compile | 59 | trunk passed |
   | +1 | checkstyle | 41 | trunk passed |
   | +1 | mvnsite | 66 | trunk passed |
   | +1 | shadedclient | 740 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 48 | trunk passed |
   | 0 | spotbugs | 183 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 181 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 17 | hadoop-hdfs in the patch failed. |
   | -1 | compile | 18 | hadoop-hdfs in the patch failed. |
   | -1 | javac | 18 | hadoop-hdfs in the patch failed. |
   | -0 | checkstyle | 13 | The patch fails to run checkstyle in hadoop-hdfs |
   | -1 | mvnsite | 18 | hadoop-hdfs in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 170 | patch has errors when building and testing our client artifacts. |
   | -1 | javadoc | 15 | hadoop-hdfs in the patch failed. |
   | -1 | findbugs | 19 | hadoop-hdfs in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 18 | hadoop-hdfs in the patch failed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 2679 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/837 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9ac6b0925f36 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1cb2eb0 |
   | Default Java | 1.8.0_212 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-837/out/maven-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
   | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-837/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on issue #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #828: HDDS-1538. Update ozone protobuf message 
for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#issuecomment-494205774
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 723 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 112 | Maven dependency ordering for branch |
   | +1 | mvninstall | 754 | trunk passed |
   | +1 | compile | 328 | trunk passed |
   | +1 | checkstyle | 86 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 950 | branch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 130 | trunk passed |
   | 0 | spotbugs | 289 | Used deprecated FindBugs config; considering switching to SpotBugs. |
   | +1 | findbugs | 473 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for patch |
   | +1 | mvninstall | 455 | the patch passed |
   | +1 | compile | 287 | the patch passed |
   | +1 | cc | 287 | the patch passed |
   | +1 | javac | 287 | the patch passed |
   | +1 | checkstyle | 93 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 693 | patch has no errors when building and testing our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | +1 | findbugs | 520 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 165 | hadoop-hdds in the patch failed. |
   | -1 | unit | 985 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 7193 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-828/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/828 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 41306946808e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 05db2a5 |
   | Default Java | 1.8.0_212 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-828/4/artifact/out/patch-unit-hadoop-hdds.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-828/4/artifact/out/patch-unit-hadoop-ozone.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-828/4/testReport/ |
   | Max. process+thread count | 4719 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common hadoop-ozone/integration-test hadoop-ozone/objectstore-service hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-828/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] amommendes opened a new pull request #837: Adding nodeId to Delimited File

2019-05-20 Thread GitBox
amommendes opened a new pull request #837: Adding nodeId to Delimited File
URL: https://github.com/apache/hadoop/pull/837
 
 
   Adding inodeId to the delimited file output, as in the XML processor.





[GitHub] [hadoop] ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285706165
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
 ##
 @@ -112,7 +113,7 @@
   private final URI ozoneRestUri;
   private final CloseableHttpClient httpClient;
   private final UserGroupInformation ugi;
-  private final OzoneAcl.OzoneACLRights userRights;
+  private final ACLType userRights;
 
 Review comment:
   The change was for compilation purposes only. I have commented it out now.





[GitHub] [hadoop] arp7 commented on a change in pull request #830: HDDS-1530. Freon support big files larger than 2GB and add --bufferSize and --validateWrites options.

2019-05-20 Thread GitBox
arp7 commented on a change in pull request #830: HDDS-1530. Freon support big 
files larger than 2GB and add --bufferSize and --validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r285780876
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -254,7 +282,7 @@ public Void call() throws Exception {
   writeValidationFailureCount = 0L;
 
   validationQueue =
-  new ArrayBlockingQueue<>(numOfThreads);
+  new LinkedBlockingQueue<>();
 
 Review comment:
   nitpick: don't need a line break here.





[GitHub] [hadoop] arp7 commented on a change in pull request #830: HDDS-1530. Freon support big files larger than 2GB and add --bufferSize and --validateWrites options.

2019-05-20 Thread GitBox
arp7 commented on a change in pull request #830: HDDS-1530. Freon support big 
files larger than 2GB and add --bufferSize and --validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r285780681
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -136,7 +141,20 @@
   description = "Specifies the size of Key in bytes to be created",
   defaultValue = "10240"
   )
-  private int keySize = 10240;
+  private long keySize = 10240;
+
+  @Option(
+  names = "--validateWrites",
+  description = "Specifies whether to validate keys after writtting"
+  )
+  private boolean validateWrites = false;
+
+  @Option(
+  names = "--bufferSize",
+  description = "Specifies the buffer size while writting",
 
 Review comment:
   typo: writting -> writing.





[GitHub] [hadoop] arp7 commented on a change in pull request #830: HDDS-1530. Freon support big files larger than 2GB and add --bufferSize and --validateWrites options.

2019-05-20 Thread GitBox
arp7 commented on a change in pull request #830: HDDS-1530. Freon support big 
files larger than 2GB and add --bufferSize and --validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r285780657
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -136,7 +141,20 @@
   description = "Specifies the size of Key in bytes to be created",
   defaultValue = "10240"
   )
-  private int keySize = 10240;
+  private long keySize = 10240;
+
+  @Option(
+  names = "--validateWrites",
+  description = "Specifies whether to validate keys after writtting"
 
 Review comment:
   typo: writtting -> writing.





[GitHub] [hadoop] arp7 commented on a change in pull request #830: HDDS-1530. Freon support big files larger than 2GB and add --bufferSize and --validateWrites options.

2019-05-20 Thread GitBox
arp7 commented on a change in pull request #830: HDDS-1530. Freon support big 
files larger than 2GB and add --bufferSize and --validateWrites options.
URL: https://github.com/apache/hadoop/pull/830#discussion_r285780568
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java
 ##
 @@ -228,8 +243,20 @@ public Void call() throws Exception {
   init(freon.createOzoneConfiguration());
 }
 
-keyValue =
-DFSUtil.string2Bytes(RandomStringUtils.randomAscii(keySize - 36));
+keyValueBuffer = DFSUtil.string2Bytes(
+RandomStringUtils.randomAscii(bufferSize));
 
 Review comment:
   Let's just write zeroes by default. Random data generation may itself become 
a bottleneck. We could add a separate option for random data later.
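The cost difference arp7 points at can be sketched outside Freon. In Java, a freshly allocated byte array is already zero-filled, while random content requires a pass over every byte. The class and method names below are illustrative, not part of the Freon code:

```java
import java.util.Random;

public class BufferFill {

    // Zero-filled buffer: Java zero-initializes new byte arrays,
    // so no per-byte generation work is needed.
    static byte[] zeroBuffer(int size) {
        return new byte[size];
    }

    // Random buffer: every byte must be generated, which can itself
    // become the bottleneck when the write path is fast.
    static byte[] randomBuffer(int size, long seed) {
        byte[] buf = new byte[size];
        new Random(seed).nextBytes(buf);
        return buf;
    }

    public static void main(String[] args) {
        byte[] zeros = zeroBuffer(4);
        byte[] random = randomBuffer(4, 42L);
        System.out.println(zeros[0] + " " + zeros.length + " " + random.length);
    }
}
```

With zero buffers the benchmark measures the storage path; with random buffers it also measures the generator, which is why a separate opt-in flag for random data is the safer default.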





[GitHub] [hadoop] hadoop-yetus commented on issue #831: HDDS-1487. Bootstrap React framework for Recon UI

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #831: HDDS-1487. Bootstrap React framework for 
Recon UI
URL: https://github.com/apache/hadoop/pull/831#issuecomment-494160899
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 383 | trunk passed |
   | +1 | compile | 206 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1021 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 117 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | +1 | mvninstall | 521 | the patch passed |
   | -1 | jshint | 2530 | The patch generated 16 new + 0 unchanged - 7061 fixed 
= 16 total (was 7061) |
   | +1 | compile | 253 | the patch passed |
   | +1 | javac | 253 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 616 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 140 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 136 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1108 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 8947 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestScmSafeMode |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/831 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml jshint |
   | uname | Linux 2b365b517b45 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 24c53e0 |
   | Default Java | 1.8.0_212 |
   | jshint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/3/artifact/out/diff-patch-jshint.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/3/testReport/ |
   | Max. process+thread count | 4533 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone hadoop-ozone/ozone-recon U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-831/3/console |
   | versions | git=2.7.4 maven=3.3.9 jshint=2.10.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #835: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #835: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/835#issuecomment-494158727
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 27 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1197 | trunk passed |
   | +1 | compile | 1249 | trunk passed |
   | +1 | checkstyle | 143 | trunk passed |
   | +1 | mvnsite | 131 | trunk passed |
   | +1 | shadedclient | 1020 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 95 | trunk passed |
   | 0 | spotbugs | 70 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 196 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 93 | the patch passed |
   | +1 | compile | 1508 | the patch passed |
   | +1 | javac | 1508 | the patch passed |
   | -0 | checkstyle | 159 | root: The patch generated 17 new + 96 unchanged - 
2 fixed = 113 total (was 98) |
   | +1 | mvnsite | 149 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 95 | the patch passed |
   | +1 | findbugs | 197 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 554 | hadoop-common in the patch passed. |
   | +1 | unit | 281 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 7939 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-835/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/835 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 8bc8ebeb7249 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 24c53e0 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-835/1/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-835/1/testReport/ |
   | Max. process+thread count | 1345 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-835/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
xiaoyuyao commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285775796
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
 ##
 @@ -98,33 +102,28 @@ public static OzoneAclInfo convertOzoneAcl(OzoneAcl acl) {
* @return OzoneAcl
*/
   public static OzoneAcl convertOzoneAcl(OzoneAclInfo aclInfo) {
-OzoneAcl.OzoneACLType aclType;
+ACLIdentityType aclType;
 switch(aclInfo.getType()) {
 case USER:
-  aclType = OzoneAcl.OzoneACLType.USER;
+  aclType = ACLIdentityType.USER;
   break;
 case GROUP:
-  aclType = OzoneAcl.OzoneACLType.GROUP;
+  aclType = ACLIdentityType.GROUP;
   break;
 case WORLD:
-  aclType = OzoneAcl.OzoneACLType.WORLD;
+  aclType = ACLIdentityType.WORLD;
   break;
 default:
   throw new IllegalArgumentException("ACL type is not recognized");
 }
-OzoneAcl.OzoneACLRights aclRights;
-switch(aclInfo.getRights()) {
-case READ:
-  aclRights = OzoneAcl.OzoneACLRights.READ;
-  break;
-case WRITE:
-  aclRights = OzoneAcl.OzoneACLRights.WRITE;
-  break;
-case READ_WRITE:
-  aclRights = OzoneAcl.OzoneACLRights.READ_WRITE;
-  break;
-default:
-  throw new IllegalArgumentException("ACL right is not recognized");
+
+List<ACLType> aclRights = new ArrayList<>();
+for(OzoneAclRights acl:aclInfo.getRightsList()) {
+  try {
+aclRights.add(ACLType.valueOf(acl.name()));
+  } catch(IllegalArgumentException iae) {
 
 Review comment:
   Remove the try/catch. Same as above.
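The pattern under discussion maps one enum to another by constant name. `Enum.valueOf` throws `IllegalArgumentException` for an unknown name, so without the try/catch an unrecognized right fails fast instead of being silently skipped, which is the behavior the reviewers prefer. A minimal sketch with illustrative stand-in enums (the real Ozone types have more values):

```java
public class AclMapping {
    // Stand-ins for the real Ozone enums, trimmed for illustration.
    enum OzoneAclRights { READ, WRITE, READ_WRITE }
    enum ACLType { READ, WRITE, READ_WRITE }

    // Name-based conversion: ACLType.valueOf throws
    // IllegalArgumentException on an unknown name, so the
    // caller fails fast rather than continuing to parse.
    static ACLType toAclType(OzoneAclRights right) {
        return ACLType.valueOf(right.name());
    }

    public static void main(String[] args) {
        System.out.println(toAclType(OzoneAclRights.READ_WRITE));
    }
}
```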





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
xiaoyuyao commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285775612
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
 ##
 @@ -98,33 +102,28 @@ public static OzoneAclInfo convertOzoneAcl(OzoneAcl acl) {
* @return OzoneAcl
*/
   public static OzoneAcl convertOzoneAcl(OzoneAclInfo aclInfo) {
-OzoneAcl.OzoneACLType aclType;
+ACLIdentityType aclType;
 switch(aclInfo.getType()) {
 case USER:
-  aclType = OzoneAcl.OzoneACLType.USER;
+  aclType = ACLIdentityType.USER;
   break;
 case GROUP:
-  aclType = OzoneAcl.OzoneACLType.GROUP;
+  aclType = ACLIdentityType.GROUP;
   break;
 case WORLD:
-  aclType = OzoneAcl.OzoneACLType.WORLD;
+  aclType = ACLIdentityType.WORLD;
 
 Review comment:
   Are we missing some enums here?





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
xiaoyuyao commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285775332
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
 ##
 @@ -72,24 +81,19 @@ public static OzoneAclInfo convertOzoneAcl(OzoneAcl acl) {
 default:
   throw new IllegalArgumentException("ACL type is not recognized");
 }
-OzoneAclInfo.OzoneAclRights aclRights;
-switch(acl.getRights()) {
-case READ:
-  aclRights = OzoneAclRights.READ;
-  break;
-case WRITE:
-  aclRights = OzoneAclRights.WRITE;
-  break;
-case READ_WRITE:
-  aclRights = OzoneAclRights.READ_WRITE;
-  break;
-default:
-  throw new IllegalArgumentException("ACL right is not recognized");
+List<OzoneAclRights> aclRights = new ArrayList<>();
+
+for(ACLType right: acl.getRights()) {
+  try {
+aclRights.add(OzoneAclRights.valueOf(right.name()));
+  } catch (IllegalArgumentException iae) {
+LOG.error("ACL:{} right is not recognized.", acl);
 
 Review comment:
   Agree with @anuengineer, based on an offline discussion with @ajayydv. We 
should stop here instead of catching the exception and letting the parsing continue. 





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
xiaoyuyao commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285773115
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -47,16 +52,37 @@ public OzoneAcl() {
*
* @param type - Type
* @param name - Name of user
-   * @param rights - Rights
+   * @param acl - Rights
*/
-  public OzoneAcl(OzoneACLType type, String name, OzoneACLRights rights) {
+  public OzoneAcl(ACLIdentityType type, String name, ACLType acl) {
 this.name = name;
-this.rights = rights;
+this.rights = new ArrayList<>();
+this.rights.add(acl);
 this.type = type;
-if (type == OzoneACLType.WORLD && name.length() != 0) {
+if (type == ACLIdentityType.WORLD && name.length() != 0) {
   throw new IllegalArgumentException("Unexpected name part in world type");
 }
-if (((type == OzoneACLType.USER) || (type == OzoneACLType.GROUP))
+if (((type == ACLIdentityType.USER) || (type == ACLIdentityType.GROUP))
+&& (name.length() == 0)) {
+  throw new IllegalArgumentException("User or group name is required");
+}
+  }
+
+  /**
+   * Constructor for OzoneAcl.
+   *
+   * @param type - Type
+   * @param name - Name of user
 
 Review comment:
   Let's do the refactor in a separate JIRA.





[GitHub] [hadoop] hadoop-yetus commented on issue #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #828: HDDS-1538. Update ozone protobuf message 
for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#issuecomment-494152503
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 73 | Maven dependency ordering for branch |
   | +1 | mvninstall | 413 | trunk passed |
   | +1 | compile | 196 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 884 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 120 | trunk passed |
   | 0 | spotbugs | 239 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 422 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 403 | the patch passed |
   | +1 | compile | 202 | the patch passed |
   | +1 | cc | 202 | the patch passed |
   | +1 | javac | 202 | the patch passed |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 703 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 120 | the patch passed |
   | +1 | findbugs | 436 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 151 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1299 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 5739 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.scm.TestGetCommittedBlockLengthAndPutKey |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.TestOzoneConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/828 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux e79a0952b6e1 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 24c53e0 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/3/testReport/ |
   | Max. process+thread count | 5245 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/objectstore-service 
hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ben-roling commented on issue #836: HADOOP-16319 skip invalid tests when default encryption enabled

2019-05-20 Thread GitBox
ben-roling commented on issue #836: HADOOP-16319 skip invalid tests when 
default encryption enabled
URL: https://github.com/apache/hadoop/pull/836#issuecomment-494151693
 
 
   With regard to testing: I've only run this one test class before and after 
the change.  It is successfully skipping the tests that were previously failing 
when default encryption was enabled on the bucket.  I did not run the full 
suite since there were no changes that could affect other test classes.
   
   My tests were performed against us-west-2.





[GitHub] [hadoop] hadoop-yetus commented on issue #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #828: HDDS-1538. Update ozone protobuf message 
for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#issuecomment-494149939
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 84 | Maven dependency ordering for branch |
   | +1 | mvninstall | 477 | trunk passed |
   | +1 | compile | 231 | trunk passed |
   | +1 | checkstyle | 57 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 856 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 144 | trunk passed |
   | 0 | spotbugs | 259 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 481 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 448 | the patch passed |
   | +1 | compile | 220 | the patch passed |
   | +1 | cc | 220 | the patch passed |
   | +1 | javac | 220 | the patch passed |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 135 | the patch passed |
   | +1 | findbugs | 553 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 186 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1386 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6248 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/828 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux d15937724ed6 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 24c53e0 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/2/testReport/ |
   | Max. process+thread count | 5314 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/integration-test hadoop-ozone/objectstore-service 
hadoop-ozone/ozone-manager hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-828/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #834: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
xiaoyuyao commented on a change in pull request #834: HDDS-1065. OM and DN 
should persist SCM certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/834#discussion_r285765066
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/x509/certificate/client/CertificateClient.java
 ##
 @@ -135,10 +135,11 @@ boolean verifySignature(byte[] data, byte[] signature,
*
* @param pemEncodedCert- pem encoded X509 Certificate
* @param force - override any existing file
+   * @param caCert- Is CA certificate.
* @throws CertificateException - on Error.
*
*/
-  void storeCertificate(String pemEncodedCert, boolean force)
+  void storeCertificate(String pemEncodedCert, boolean force, boolean caCert)
 
 Review comment:
   Adding a new interface method storeCACertificate() can avoid many unrelated 
changes.
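The reviewer's suggestion can be sketched as follows. This is a minimal, hypothetical illustration, not the actual Ozone API: `CertificateClientSketch`, `InMemoryCertificateClient`, and the in-memory map stand in for the real `CertificateClient` interface and its file-backed certificate store. The point is that a dedicated `storeCACertificate()` method leaves the existing `storeCertificate(String, boolean)` signature, and all of its callers, untouched.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the review suggestion: a separate method for CA
// certificates instead of a third boolean parameter on storeCertificate.
interface CertificateClientSketch {
    void storeCertificate(String pemEncodedCert, boolean force);

    // New method proposed in the review; existing callers are unaffected.
    void storeCACertificate(String pemEncodedCert, boolean force);
}

class InMemoryCertificateClient implements CertificateClientSketch {
    // pem -> isCA; stands in for the real file-backed store.
    private final Map<String, Boolean> certs = new HashMap<>();

    @Override
    public void storeCertificate(String pem, boolean force) {
        if (!force && certs.containsKey(pem)) {
            throw new IllegalStateException("certificate already stored");
        }
        certs.put(pem, false);
    }

    @Override
    public void storeCACertificate(String pem, boolean force) {
        if (!force && certs.containsKey(pem)) {
            throw new IllegalStateException("certificate already stored");
        }
        certs.put(pem, true);
    }

    boolean isCA(String pem) {
        return Boolean.TRUE.equals(certs.get(pem));
    }
}

public class Main {
    public static void main(String[] args) {
        InMemoryCertificateClient client = new InMemoryCertificateClient();
        client.storeCertificate("leaf-pem", false);
        client.storeCACertificate("ca-pem", false);
        System.out.println(client.isCA("ca-pem"));   // true
        System.out.println(client.isCA("leaf-pem")); // false
    }
}
```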





[GitHub] [hadoop] hadoop-yetus commented on issue #836: HADOOP-16319 skip invalid tests when default encryption enabled

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #836: HADOOP-16319 skip invalid tests when 
default encryption enabled
URL: https://github.com/apache/hadoop/pull/836#issuecomment-494144822
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 507 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1067 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 692 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 31 | the patch passed |
   | +1 | compile | 31 | the patch passed |
   | +1 | javac | 31 | the patch passed |
   | +1 | checkstyle | 17 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 752 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | the patch passed |
   | +1 | findbugs | 61 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 266 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3782 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-836/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/836 |
   | Optional Tests | dupname asflicense mvnsite compile javac javadoc 
mvninstall unit shadedclient findbugs checkstyle |
   | uname | Linux 07b98714516a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 24c53e0 |
   | Default Java | 1.8.0_212 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-836/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-836/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer merged pull request #831: HDDS-1487. Bootstrap React framework for Recon UI

2019-05-20 Thread GitBox
anuengineer merged pull request #831: HDDS-1487. Bootstrap React framework for 
Recon UI
URL: https://github.com/apache/hadoop/pull/831
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #831: HDDS-1487. Bootstrap React framework for Recon UI

2019-05-20 Thread GitBox
anuengineer commented on issue #831: HDDS-1487. Bootstrap React framework for 
Recon UI
URL: https://github.com/apache/hadoop/pull/831#issuecomment-494132578
 
 
   @jiwq  Thanks for the review. @vivekratnavel  Thanks for the contribution, I 
will commit this to the trunk.





[GitHub] [hadoop] anuengineer merged pull request #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-20 Thread GitBox
anuengineer merged pull request #799: HDDS-1451 : SCMBlockManager findPipeline 
and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799
 
 
   





[GitHub] [hadoop] anuengineer commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-20 Thread GitBox
anuengineer commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline 
and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#issuecomment-494130302
 
 
   +1, LGTM. Thanks.





[GitHub] [hadoop] ben-roling opened a new pull request #836: HADOOP-16319 skip invalid tests when default encryption enabled

2019-05-20 Thread GitBox
ben-roling opened a new pull request #836: HADOOP-16319 skip invalid tests when 
default encryption enabled
URL: https://github.com/apache/hadoop/pull/836
 
 
   ETag values are unpredictable with some encryption algorithms. Skip tests 
asserting ETags when default encryption is enabled.
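The rationale above can be sketched as a small predicate a test would check before asserting on ETag values. This is a sketch, not the actual hadoop-aws change; `etagIsPredictable` is a hypothetical helper name. The assumption encoded here is the documented S3 behavior that an object's ETag is the MD5 of its content only for unencrypted and SSE-S3 (`AES256`) objects, while SSE-KMS and SSE-C produce opaque ETags.

```java
// Hedged sketch: decide whether ETag assertions are meaningful for a given
// server-side-encryption algorithm. With SSE-KMS ("aws:kms") or SSE-C the
// ETag is not a content hash, so ETag-asserting tests should be skipped.
public class Main {
    static boolean etagIsPredictable(String sseAlgorithm) {
        if (sseAlgorithm == null || sseAlgorithm.isEmpty()) {
            return true; // no encryption: ETag is the content MD5
        }
        // SSE-S3 also yields an MD5-based ETag; anything else does not.
        return "AES256".equals(sseAlgorithm);
    }

    public static void main(String[] args) {
        System.out.println(etagIsPredictable(null));      // true
        System.out.println(etagIsPredictable("AES256"));  // true
        System.out.println(etagIsPredictable("aws:kms")); // false
    }
}
```

In a JUnit 4 test, the result would feed `Assume.assumeTrue(...)`, so the test is reported as skipped rather than failed when default encryption makes the assertion invalid.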





[jira] [Commented] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2019-05-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844234#comment-16844234
 ] 

Hadoop QA commented on HADOOP-16322:


| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | docker | 11m 39s | Docker failed to build yetus/hadoop:749e106. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16322 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12969194/HADOOP-16322-branch-2.8.5.001.patch |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16259/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |


This message was automatically generated.



> FileNotFoundException for checksum file from hadoop-maven-plugins
> -
>
> Key: HADOOP-16322
> URL: https://issues.apache.org/jira/browse/HADOOP-16322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.5
>Reporter: Dongwook Kwon
>Priority: Minor
>  Labels: easyfix
> Fix For: 2.8.5
>
> Attachments: HADOOP-16322-branch-2.8.5.001.patch, HADOOP-16322.patch
>
>
> I found that hadoop-maven-plugins has an issue with the checksum-file 
> creation introduced by https://issues.apache.org/jira/browse/HADOOP-12194 
> Since 
> [checksumFile|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L63]
>  is expected to be 
> "${project.build.directory}/hadoop-maven-plugins-protoc-checksums.json", 
> writing the [checksum 
> file|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L167]
>  throws an exception when ${project.build.directory} does not exist yet, 
> as in the following error from HBase, which relies on hadoop-maven-plugins 
> to generate protoc output:
>  
> {{[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: failed to get report for 
> org.apache.maven.plugins:maven-javadoc-plugin: Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:2.8.5:protoc (compile-protoc) on 
> project hbase-examples: java.io.FileNotFoundException: 
> /Users/dongwook/devrepo/apache-git/hbase/hbase-examples/target/hadoop-maven-plugins-protoc-checksums.json
>  (No such file or directory) -> [Help 1]}}
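A minimal sketch of the kind of fix this report calls for, assuming the simplest approach of creating the missing parent directory before writing. `writeChecksumFile` is a hypothetical stand-in for the write in `ProtocMojo`, not the actual patch.

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: ensure ${project.build.directory} exists before writing the
// checksum JSON, so the write cannot fail with FileNotFoundException
// just because the target directory has not been created yet.
public class Main {
    static void writeChecksumFile(File checksumFile, String json) throws IOException {
        File parent = checksumFile.getParentFile();
        if (parent != null && !parent.exists() && !parent.mkdirs()) {
            throw new IOException("Could not create directory " + parent);
        }
        Files.write(checksumFile.toPath(), json.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("protoc-test");
        // The "target" directory deliberately does not exist yet.
        File f = new File(tmp.toFile(),
            "target/hadoop-maven-plugins-protoc-checksums.json");
        writeChecksumFile(f, "{}");
        System.out.println(f.exists()); // true
    }
}
```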



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #835: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-20 Thread GitBox
steveloughran opened a new pull request #835: HADOOP-15183 S3Guard store 
becomes inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/835
 
 
   This addresses
   
   * HADOOP-15183: S3Guard store becomes inconsistent after partial failure of 
rename
   * HADOOP-13936 S3Guard: DynamoDB can go out of sync with 
S3AFileSystem::delete operation
   * HADOOP-15604 Bulk commits of S3A MPUs place needless excessive load on S3 
& S3Guard
   
   
   
   





[jira] [Updated] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2019-05-20 Thread Dongwook Kwon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongwook Kwon updated HADOOP-16322:
---
Status: Open  (was: Patch Available)







[jira] [Updated] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2019-05-20 Thread Dongwook Kwon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongwook Kwon updated HADOOP-16322:
---
Attachment: HADOOP-16322-branch-2.8.5.001.patch
Status: Patch Available  (was: Open)







[jira] [Commented] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2019-05-20 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844222#comment-16844222
 ] 

Hadoop QA commented on HADOOP-16322:


| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 7s | HADOOP-16322 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16322 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12969193/HADOOP-16322.patch |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16258/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |


This message was automatically generated.









[jira] [Updated] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2019-05-20 Thread Dongwook Kwon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongwook Kwon updated HADOOP-16322:
---
Attachment: HADOOP-16322.patch
Status: Patch Available  (was: Open)







[jira] [Updated] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2019-05-20 Thread Dongwook Kwon (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongwook Kwon updated HADOOP-16322:
---
Description: 
I found hadoop-maven-plugins has an issue with checksum file creation which was 
updated by https://issues.apache.org/jira/browse/HADOOP-12194 

Since 
[checksumFile|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L63]
 is expected to be 
"${project.build.directory}/hadoop-maven-plugins-protoc-checksums.json", when 
${project.build.directory} doesn't exist yet, writing [checksum 
file|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L167],
 throws exception.

such as the following from HBase which rely on Hadoop-maven-plugins to generate 
Protoc

 

{{[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
hbase: failed to get report for org.apache.maven.plugins:maven-javadoc-plugin: 
Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.8.5:protoc 
(compile-protoc) on project hbase-examples: java.io.FileNotFoundException: 
/Users/dongwook/devrepo/apache-git/hbase/hbase-examples/target/hadoop-maven-plugins-protoc-checksums.json
 (No such file or directory) -> [Help 1]}}

  was:
I found hadoop-maven-plugins has an issue with checksum file creation which was 
updated by https://issues.apache.org/jira/browse/HADOOP-12194 

Since 
[checksumFile|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L63]
 is expected to be 
"${project.build.directory}/hadoop-maven-plugins-protoc-checksums.json", when 
${project.build.directory} doesn't exist yet, writing [checksum 
file|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L167],
 throws exception.

such as the following from HBase which rely on Hadoop-maven-plugins to generate 
Protoc

 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
hbase: failed to get report for org.apache.maven.plugins:maven-javadoc-plugin: 
Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.8.5:protoc 
(compile-protoc) on project hbase-examples: java.io.FileNotFoundException: 
/Users/dongwook/devrepo/apache-git/hbase/hbase-examples/target/hadoop-maven-plugins-protoc-checksums.json
 (No such file or directory) -> [Help 1]








[jira] [Created] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2019-05-20 Thread Dongwook Kwon (JIRA)
Dongwook Kwon created HADOOP-16322:
--

 Summary: FileNotFoundException for checksum file from 
hadoop-maven-plugins
 Key: HADOOP-16322
 URL: https://issues.apache.org/jira/browse/HADOOP-16322
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.8.5
Reporter: Dongwook Kwon
 Fix For: 2.8.5


I found hadoop-maven-plugins has an issue with checksum file creation which was 
updated by https://issues.apache.org/jira/browse/HADOOP-12194 

Since 
[checksumFile|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L63]
 is expected to be 
"${project.build.directory}/hadoop-maven-plugins-protoc-checksums.json", when 
${project.build.directory} doesn't exist yet, writing [checksum 
file|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L167],
 throws exception.

such as the following from HBase which rely on Hadoop-maven-plugins to generate 
Protoc

 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
hbase: failed to get report for org.apache.maven.plugins:maven-javadoc-plugin: 
Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.8.5:protoc 
(compile-protoc) on project hbase-examples: java.io.FileNotFoundException: 
/Users/dongwook/devrepo/apache-git/hbase/hbase-examples/target/hadoop-maven-plugins-protoc-checksums.json
 (No such file or directory) -> [Help 1]






[GitHub] [hadoop] steveloughran commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-20 Thread GitBox
steveloughran commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-494106274
 
 
   Closing this PR and kicking off a new one





[GitHub] [hadoop] steveloughran closed pull request #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-20 Thread GitBox
steveloughran closed pull request #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #834: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #834: HDDS-1065. OM and DN should persist SCM 
certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/834#issuecomment-494093153
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 62 | Maven dependency ordering for branch |
   | +1 | mvninstall | 395 | trunk passed |
   | +1 | compile | 202 | trunk passed |
   | +1 | checkstyle | 53 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 823 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 122 | trunk passed |
   | 0 | spotbugs | 235 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 422 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 377 | the patch passed |
   | +1 | compile | 195 | the patch passed |
   | +1 | javac | 195 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 604 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 116 | the patch passed |
   | +1 | findbugs | 422 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 137 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1239 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 5446 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-834/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/834 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 03cf50d376b3 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 24c53e0 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-834/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-834/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-834/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-834/1/testReport/ |
   | Max. process+thread count | 4278 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-834/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285706165
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rest/RestClient.java
 ##
 @@ -112,7 +113,7 @@
   private final URI ozoneRestUri;
   private final CloseableHttpClient httpClient;
   private final UserGroupInformation ugi;
-  private final OzoneAcl.OzoneACLRights userRights;
+  private final ACLType userRights;
 
 Review comment:
   This change is for compilation purposes only; we can remove the REST client.





[GitHub] [hadoop] ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285703674
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/protocolPB/OMPBHelper.java
 ##
 @@ -72,24 +81,19 @@ public static OzoneAclInfo convertOzoneAcl(OzoneAcl acl) {
 default:
   throw new IllegalArgumentException("ACL type is not recognized");
 }
-OzoneAclInfo.OzoneAclRights aclRights;
-switch(acl.getRights()) {
-case READ:
-  aclRights = OzoneAclRights.READ;
-  break;
-case WRITE:
-  aclRights = OzoneAclRights.WRITE;
-  break;
-case READ_WRITE:
-  aclRights = OzoneAclRights.READ_WRITE;
-  break;
-default:
-  throw new IllegalArgumentException("ACL right is not recognized");
+List aclRights = new ArrayList<>();
+
+for(ACLType right: acl.getRights()) {
+  try {
+aclRights.add(OzoneAclRights.valueOf(right.name()));
+  } catch (IllegalArgumentException iae) {
+LOG.error("ACL:{} right is not recognized.", acl);
 
 Review comment:
   Since this involves a list of ACLs, IMO a single unrecognized ACL type should 
not halt processing of the remaining ones. But I am open to changing this if you 
think we should throw an error at this point.
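The lenient conversion argued for here can be sketched with stand-in enums. Note that `SourceRight` and `TargetRight` below are hypothetical placeholders, not the real Ozone `ACLType`/`OzoneAclRights` classes; the point is only the skip-and-log pattern around `Enum.valueOf`.

```java
import java.util.ArrayList;
import java.util.List;

public class LenientAclMapping {
    enum SourceRight { READ, WRITE, DELETE }
    enum TargetRight { READ, WRITE }  // DELETE intentionally has no counterpart

    static List<TargetRight> convert(List<SourceRight> rights) {
        List<TargetRight> out = new ArrayList<>();
        for (SourceRight r : rights) {
            try {
                // valueOf throws IllegalArgumentException for unknown names
                out.add(TargetRight.valueOf(r.name()));
            } catch (IllegalArgumentException iae) {
                // A single unrecognized right is logged and skipped rather
                // than failing the whole conversion.
                System.err.println("Right not recognized, skipping: " + r);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<SourceRight> in = List.of(
            SourceRight.READ, SourceRight.DELETE, SourceRight.WRITE);
        System.out.println(convert(in));  // prints [READ, WRITE]
    }
}
```

The counter-argument in the thread is that silently dropping a right can mask a protocol mismatch; throwing instead would turn that into a hard failure at conversion time.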





[GitHub] [hadoop] ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285702158
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -256,13 +258,13 @@ public void createVolume(String volumeName, VolumeArgs 
volArgs)
 OzoneQuota.parseQuota(volArgs.getQuota()).sizeInBytes();
 List listOfAcls = new ArrayList<>();
 //User ACL
-listOfAcls.add(new OzoneAcl(OzoneAcl.OzoneACLType.USER,
+listOfAcls.add(new OzoneAcl(ACLIdentityType.USER,
 
 Review comment:
   We can address this by adding ANONYMOUS type in ACLIdentityType.





[GitHub] [hadoop] ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285699438
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -47,16 +52,37 @@ public OzoneAcl() {
*
* @param type - Type
* @param name - Name of user
-   * @param rights - Rights
+   * @param acl - Rights
*/
-  public OzoneAcl(OzoneACLType type, String name, OzoneACLRights rights) {
+  public OzoneAcl(ACLIdentityType type, String name, ACLType acl) {
 this.name = name;
-this.rights = rights;
+this.rights = new ArrayList<>();
+this.rights.add(acl);
 this.type = type;
-if (type == OzoneACLType.WORLD && name.length() != 0) {
+if (type == ACLIdentityType.WORLD && name.length() != 0) {
   throw new IllegalArgumentException("Unexpected name part in world type");
 }
-if (((type == OzoneACLType.USER) || (type == OzoneACLType.GROUP))
+if (((type == ACLIdentityType.USER) || (type == ACLIdentityType.GROUP))
+&& (name.length() == 0)) {
+  throw new IllegalArgumentException("User or group name is required");
+}
+  }
+
+  /**
+   * Constructor for OzoneAcl.
+   *
+   * @param type - Type
+   * @param name - Name of user
+   * @param acls - Rights
+   */
+  public OzoneAcl(ACLIdentityType type, String name, List acls) {
+this.name = name;
+this.rights = acls;
+this.type = type;
+if (type == ACLIdentityType.WORLD && name.length() != 0) {
 
 Review comment:
   The logic is taken from the existing constructor. I think the intention is to 
require a name when the type is GROUP or USER, but not when it is WORLD.
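That validation rule (name mandatory for USER/GROUP, forbidden for WORLD) can be sketched standalone. `AclIdentity` and `validate` below are illustrative names, not the actual Ozone API; the checks mirror the constructor logic shown in the diff above.

```java
public class AclNameValidation {
    enum AclIdentity { USER, GROUP, WORLD }

    static void validate(AclIdentity type, String name) {
        // WORLD applies to everyone, so a specific name is meaningless.
        if (type == AclIdentity.WORLD && !name.isEmpty()) {
            throw new IllegalArgumentException(
                "Unexpected name part in world type");
        }
        // USER and GROUP ACLs are meaningless without a principal name.
        if ((type == AclIdentity.USER || type == AclIdentity.GROUP)
                && name.isEmpty()) {
            throw new IllegalArgumentException(
                "User or group name is required");
        }
    }

    public static void main(String[] args) {
        validate(AclIdentity.USER, "alice");  // accepted
        validate(AclIdentity.WORLD, "");      // accepted
        try {
            validate(AclIdentity.WORLD, "bob");
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```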





[GitHub] [hadoop] ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285697208
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -47,16 +52,37 @@ public OzoneAcl() {
*
* @param type - Type
* @param name - Name of user
-   * @param rights - Rights
+   * @param acl - Rights
*/
-  public OzoneAcl(OzoneACLType type, String name, OzoneACLRights rights) {
+  public OzoneAcl(ACLIdentityType type, String name, ACLType acl) {
 this.name = name;
-this.rights = rights;
+this.rights = new ArrayList<>();
+this.rights.add(acl);
 this.type = type;
-if (type == OzoneACLType.WORLD && name.length() != 0) {
+if (type == ACLIdentityType.WORLD && name.length() != 0) {
   throw new IllegalArgumentException("Unexpected name part in world type");
 }
-if (((type == OzoneACLType.USER) || (type == OzoneACLType.GROUP))
+if (((type == ACLIdentityType.USER) || (type == ACLIdentityType.GROUP))
+&& (name.length() == 0)) {
+  throw new IllegalArgumentException("User or group name is required");
+}
+  }
+
+  /**
+   * Constructor for OzoneAcl.
+   *
+   * @param type - Type
+   * @param name - Name of user
 
 Review comment:
   Shall we handle it in a separate jira? We can utilize the RequestContext 
wrapper to refactor this.





[jira] [Commented] (HADOOP-16320) Workaround bug with commons-configuration to be able to emit Ganglia metrics to multiple sink servers

2019-05-20 Thread Thomas Poepping (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844148#comment-16844148
 ] 

Thomas Poepping commented on HADOOP-16320:
--

Great, thanks Steve, I'll get to work on that.

> Workaround bug with commons-configuration to be able to emit Ganglia metrics 
> to multiple sink servers
> -
>
> Key: HADOOP-16320
> URL: https://issues.apache.org/jira/browse/HADOOP-16320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.9.2, 2.9.3
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>Priority: Minor
> Attachments: HADOOP-16320.patch
>
>
> AbstractGangliaSink is used by the hadoop-metrics2 package to emit metrics to 
> Ganglia. Currently, this class uses the apache commons-configuration package 
> to read from the hadoop-metrics2.properties file. commons-configuration is 
> outdated, and has a bug where the .getString function drops everything after 
> the first comma. This change uses .getList instead, which will work for one 
> or many Ganglia sink servers.
>  
> This is fixed in trunk by upgrading to commons-configuration2, which doesn't 
> have the bug anymore.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[GitHub] [hadoop] ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285695497
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -47,16 +52,37 @@ public OzoneAcl() {
*
* @param type - Type
* @param name - Name of user
-   * @param rights - Rights
+   * @param acl - Rights
*/
-  public OzoneAcl(OzoneACLType type, String name, OzoneACLRights rights) {
+  public OzoneAcl(ACLIdentityType type, String name, ACLType acl) {
 this.name = name;
-this.rights = rights;
+this.rights = new ArrayList<>();
+this.rights.add(acl);
 this.type = type;
-if (type == OzoneACLType.WORLD && name.length() != 0) {
+if (type == ACLIdentityType.WORLD && name.length() != 0) {
   throw new IllegalArgumentException("Unexpected name part in world type");
 }
-if (((type == OzoneACLType.USER) || (type == OzoneACLType.GROUP))
+if (((type == ACLIdentityType.USER) || (type == ACLIdentityType.GROUP))
+&& (name.length() == 0)) {
+  throw new IllegalArgumentException("User or group name is required");
+}
+  }
+
+  /**
+   * Constructor for OzoneAcl.
+   *
+   * @param type - Type
+   * @param name - Name of user
 
 Review comment:
   I am open to this, but it will require the Ranger team to update their 
authorizer. Shall we handle it in a different jira?








[GitHub] [hadoop] ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone protobuf message for ACLs. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv commented on a change in pull request #828: HDDS-1538. Update ozone 
protobuf message for ACLs. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/828#discussion_r285694947
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
 ##
 @@ -56,8 +56,6 @@
 
   public static final String OZONE_ACL_READ = "r";
 
 Review comment:
   This is already there in the form of the following constants:
 public static final String OZONE_ACL_CREATE = "c";
 public static final String OZONE_ACL_READ_ACL = "x";
 public static final String OZONE_ACL_WRITE_ACL = "y";





[GitHub] [hadoop] avijayanhwx commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-20 Thread GitBox
avijayanhwx commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline 
and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#issuecomment-494074290
 
 
   > @avijayanhwx the patch looks good to me. Can you please confirm that the 
test failures are not due to this patch. Thanks in Advance.
   
   @anuengineer The failures seem unrelated. I have rebased with latest trunk. 
We can see how this run goes and then commit it. 





[jira] [Commented] (HADOOP-16321) ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors

2019-05-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844123#comment-16844123
 ] 

Steve Loughran commented on HADOOP-16321:
-

It's failing because there's no native library, so it's not loading. Getting 
things to build on macOS is "troublesome" to the extent that native libs are 
generally not built. This doesn't normally surface as an issue, though it does 
block bringing up YARN in Kerberos mode, for various reasons.

> ITestS3ASSL+TestOpenSSLSocketFactory failing with 
> java.lang.UnsatisfiedLinkErrors
> -
>
> Key: HADOOP-16321
> URL: https://issues.apache.org/jira/browse/HADOOP-16321
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: macos
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>Priority: Major
>
> the new test of HADOOP-16050  {{ITestS3ASSL}} is failing with 
> {{java.lang.UnsatisfiedLinkError}}






[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-20 Thread GitBox
lokeshj1703 commented on a change in pull request #714: HDDS-1406. Avoid usage 
of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r285682170
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -24,35 +24,75 @@
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.ReplicationFactor;
 import org.apache.hadoop.hdds.protocol.proto.HddsProtos.NodeState;
 import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
 import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.ContainerPlacementPolicy;
 import 
org.apache.hadoop.hdds.scm.container.placement.algorithms.SCMContainerPlacementRandom;
 import org.apache.hadoop.hdds.scm.node.NodeManager;
 import org.apache.hadoop.hdds.scm.pipeline.Pipeline.PipelineState;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.ratis.RatisHelper;
+import org.apache.ratis.client.RaftClient;
+import org.apache.ratis.grpc.GrpcTlsConfig;
+import org.apache.ratis.protocol.RaftClientReply;
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.protocol.RaftPeer;
+import org.apache.ratis.retry.RetryPolicy;
+import org.apache.ratis.rpc.SupportedRpcType;
+import org.apache.ratis.util.TimeDuration;
+import org.apache.ratis.util.function.CheckedBiConsumer;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
 import java.lang.reflect.Constructor;
 import java.lang.reflect.InvocationTargetException;
+import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Set;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ForkJoinPool;
+import java.util.concurrent.ForkJoinWorkerThread;
+import java.util.concurrent.RejectedExecutionException;
 import java.util.stream.Collectors;
 
 /**
  * Implements Api for creating ratis pipelines.
  */
 public class RatisPipelineProvider implements PipelineProvider {
 
+  private static final Logger LOG =
+  LoggerFactory.getLogger(RatisPipelineProvider.class);
+
   private final NodeManager nodeManager;
   private final PipelineStateManager stateManager;
   private final Configuration conf;
 
+  // Set parallelism at 3, as now in Ratis we create 1 and 3 node pipelines.
+  private final int parallelisimForPool = 3;
 
 Review comment:
   There is a typo. parallelis'i'mForPool.





[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-20 Thread GitBox
lokeshj1703 commented on a change in pull request #714: HDDS-1406. Avoid usage 
of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r285681320
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -133,7 +173,86 @@ public Pipeline create(ReplicationFactor factor,
 .build();
   }
 
+
+  @Override
+  public void shutdown() {
+forkJoinPool.shutdownNow();
 
 Review comment:
   @bharatviswa504 I agree. We need to use shutdownNow, but we also need to use 
awaitTermination. shutdownNow interrupts the running tasks, but a running task 
must handle the interrupt; if a task does not exit on interrupt, it is better 
to wait for it to finish.
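A minimal sketch of that shutdown sequence. The names here are illustrative, and a plain `ExecutorService` is used instead of the actual `RatisPipelineProvider` fork-join pool; the pattern (`shutdownNow()` to interrupt, then `awaitTermination()` to wait) is the same.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolShutdown {
    // shutdownNow() interrupts running tasks and drains the queue;
    // awaitTermination() then waits for tasks that honor (or ignore)
    // the interrupt to actually finish.
    static boolean shutdownAndWait(ExecutorService pool, long timeoutSec)
            throws InterruptedException {
        pool.shutdownNow();
        return pool.awaitTermination(timeoutSec, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        pool.submit(() -> {
            try {
                Thread.sleep(10_000);  // simulated long-running task
            } catch (InterruptedException ie) {
                // Task exits promptly on interrupt, so termination is fast;
                // a task that swallowed the interrupt would keep running.
                Thread.currentThread().interrupt();
            }
        });
        System.out.println("terminated: " + shutdownAndWait(pool, 5));
    }
}
```

If a task swallows the interrupt, `awaitTermination` returns false at the timeout, which is exactly the case the comment above is guarding against.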





[GitHub] [hadoop] ajayydv opened a new pull request #834: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv opened a new pull request #834: HDDS-1065. OM and DN should persist SCM 
certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/834
 
 
   
   (cherry picked from commit 7254bf06e66deaf4dd9d00e65fc8894bd869a797)





[jira] [Commented] (HADOOP-16321) ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors

2019-05-20 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844090#comment-16844090
 ] 

Sahil Takiar commented on HADOOP-16321:
---

I don't actually know why the {{UnsatisfiedLinkError}} is thrown in the first 
place; it would be nice to fix that on OSX so that tests which invoke native 
code can run.

> ITestS3ASSL+TestOpenSSLSocketFactory failing with 
> java.lang.UnsatisfiedLinkErrors
> -
>
> Key: HADOOP-16321
> URL: https://issues.apache.org/jira/browse/HADOOP-16321
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: macos
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>Priority: Major
>
> the new test of HADOOP-16050  {{ITestS3ASSL}} is failing with 
> {{java.lang.UnsatisfiedLinkError}}






[GitHub] [hadoop] bshashikant commented on issue #825: HDDS-1449. JVM Exit in datanode while committing a key. Contributed by Mukul Kumar Singh.

2019-05-20 Thread GitBox
bshashikant commented on issue #825: HDDS-1449. JVM Exit in datanode while 
committing a key. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/825#issuecomment-494057436
 
 
   Thanks @mukul1987 for updating the patch. The patch looks good to me, but the 
TestContainerPersistence#testDeleteBlockTwice failure seems to be related. Can 
you please check?





[GitHub] [hadoop] lokeshj1703 commented on a change in pull request #714: HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.

2019-05-20 Thread GitBox
lokeshj1703 commented on a change in pull request #714: HDDS-1406. Avoid usage 
of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r285672796
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
 ##
 @@ -133,7 +173,86 @@ public Pipeline create(ReplicationFactor factor,
 .build();
   }
 
+
+  @Override
+  public void shutdown() {
+forkJoinPool.shutdownNow();
+  }
+
   protected void initializePipeline(Pipeline pipeline) throws IOException {
-RatisPipelineUtils.createPipeline(pipeline, conf);
+createPipeline(pipeline);
+  }
+
+  /**
+   * Sends ratis command to create pipeline on all the datanodes.
+   *
+   * @param pipeline  - Pipeline to be created
+   * @throws IOException if creation fails
+   */
+  public void createPipeline(Pipeline pipeline)
+  throws IOException {
+final RaftGroup group = RatisHelper.newRaftGroup(pipeline);
+LOG.debug("creating pipeline:{} with {}", pipeline.getId(), group);
+callRatisRpc(pipeline.getNodes(),
+(raftClient, peer) -> {
+  RaftClientReply reply = raftClient.groupAdd(group, peer.getId());
+  if (reply == null || !reply.isSuccess()) {
+String msg = "Pipeline initialization failed for pipeline:"
++ pipeline.getId() + " node:" + peer.getId();
+LOG.error(msg);
+throw new IOException(msg);
+  }
+});
+  }
+
+  private void callRatisRpc(List datanodes,
+  CheckedBiConsumer< RaftClient, RaftPeer, IOException> rpc)
+  throws IOException {
+if (datanodes.isEmpty()) {
+  return;
+}
+
+final String rpcType = conf
+.get(ScmConfigKeys.DFS_CONTAINER_RATIS_RPC_TYPE_KEY,
+ScmConfigKeys.DFS_CONTAINER_RATIS_RPC_TYPE_DEFAULT);
+final RetryPolicy retryPolicy = RatisHelper.createRetryPolicy(conf);
+final List< IOException > exceptions =
+Collections.synchronizedList(new ArrayList<>());
+final int maxOutstandingRequests =
+HddsClientUtils.getMaxOutstandingRequests(conf);
+final GrpcTlsConfig tlsConfig = RatisHelper.createTlsClientConfig(new
+SecurityConfig(conf));
+final TimeDuration requestTimeout =
+RatisHelper.getClientRequestTimeout(conf);
+try {
+  forkJoinPool.submit(() -> {
 
 Review comment:
   @bharatviswa504 Can you please verify this?





[GitHub] [hadoop] hadoop-yetus commented on issue #754: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #754: HDDS-1065. OM and DN should persist SCM 
certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/754#issuecomment-494053441
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 51 | https://github.com/apache/hadoop/pull/754 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/754 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-754/4/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajayydv closed pull request #754: HDDS-1065. OM and DN should persist SCM certificate as the trust root. Contributed by Ajay Kumar.

2019-05-20 Thread GitBox
ajayydv closed pull request #754: HDDS-1065. OM and DN should persist SCM 
certificate as the trust root. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/754
 
 
   





[jira] [Commented] (HADOOP-15658) Memory leak in S3AOutputStream

2019-05-20 Thread Piotr Nowojski (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844057#comment-16844057
 ] 

Piotr Nowojski commented on HADOOP-15658:
-

Thanks [~ste...@apache.org], that's great to hear :)

> Memory leak in S3AOutputStream
> --
>
> Key: HADOOP-15658
> URL: https://issues.apache.org/jira/browse/HADOOP-15658
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4
>Reporter: Piotr Nowojski
>Priority: Major
>
> S3AOutputStream, by calling 
> {{org.apache.hadoop.fs.s3a.S3AFileSystem#createTmpFileForWrite}}, indirectly 
> calls {{java.io.File#deleteOnExit}} and {{java.io.DeleteOnExitHook}}, which 
> are known to leak memory:
>   
>  [https://bugs.java.com/view_bug.do?bug_id=6664633]
>  [https://bugs.java.com/view_bug.do?bug_id=4872014]
>   
>  Apparently the same bug was even fixed for a different component a couple 
> of years ago, in an unrelated issue: 
> https://issues.apache.org/jira/browse/HADOOP-8635 . 
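A minimal standalone sketch of the leak pattern described above (the class and file names here are illustrative, not taken from S3A): each `deleteOnExit()` call registers the path in a static set inside `java.io.DeleteOnExitHook` that is only drained at JVM shutdown, so a long-lived service accumulates one entry per temp file; deleting eagerly avoids that.

```java
import java.io.File;
import java.io.IOException;

public class DeleteOnExitLeak {
    public static void main(String[] args) throws IOException {
        // Each deleteOnExit() call appends the absolute path to a static
        // LinkedHashSet inside java.io.DeleteOnExitHook, which is only
        // drained at JVM shutdown -- so a long-lived process that creates
        // many temp files retains one entry per file for its lifetime.
        for (int i = 0; i < 3; i++) {
            File tmp = File.createTempFile("s3a-demo-", ".tmp");
            tmp.deleteOnExit();             // entry retained until shutdown
        }
        // Safer pattern for long-lived services: delete eagerly, so no
        // shutdown-hook entry ever accumulates.
        File eager = File.createTempFile("s3a-demo-", ".tmp");
        System.out.println(eager.delete()); // true: removed immediately
    }
}
```
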



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15658) Memory leak in S3AOutputStream

2019-05-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844049#comment-16844049
 ] 

Steve Loughran commented on HADOOP-15658:
-

Fixing in HADOOP-15183; the gentle S3A refactoring work is turning S3A's 
createTmpFileForWrite() into a function you can call in that context - time to 
clean it up. 

> Memory leak in S3AOutputStream
> --
>
> Key: HADOOP-15658
> URL: https://issues.apache.org/jira/browse/HADOOP-15658
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4
>Reporter: Piotr Nowojski
>Priority: Major
>
> S3AOutputStream, by calling 
> {{org.apache.hadoop.fs.s3a.S3AFileSystem#createTmpFileForWrite}}, indirectly 
> calls {{java.io.File#deleteOnExit}} and {{java.io.DeleteOnExitHook}}, which 
> are known to leak memory:
>   
>  [https://bugs.java.com/view_bug.do?bug_id=6664633]
>  [https://bugs.java.com/view_bug.do?bug_id=4872014]
>   
>  Apparently the same bug was even fixed for a different component a couple 
> of years ago, in an unrelated issue: 
> https://issues.apache.org/jira/browse/HADOOP-8635 . 






[jira] [Commented] (HADOOP-16321) ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors

2019-05-20 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844045#comment-16844045
 ] 

Sahil Takiar commented on HADOOP-16321:
---

Can confirm this is environmental, since Hadoop QA is capable of running 
{{TestOpenSSLSocketFactory}} and I can run {{ITestS3ASSL}} and 
{{TestOpenSSLSocketFactory}} inside the Hadoop env docker container 
({{./start-build-env.sh}}) and on my Ubuntu 16.04.6 machine.

I can reproduce the {{UnsatisfiedLinkError}} on OSX. Interestingly, all other 
tests that use {{NativeCodeLoader#buildSupportsOpenssl}} fail with the same 
error. I think the issue is that {{#buildSupportsOpenssl}} is a native method, 
so there should really be a call to {{NativeCodeLoader#isNativeCodeLoaded}} 
first to check whether {{libhadoop}} has been loaded ({{#isNativeCodeLoaded}} 
is not a native method); if it hasn't, there is no point in calling 
{{#buildSupportsOpenssl}}, as the call will fail.

Will post a patch to fix this.
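A hedged, self-contained sketch of that guard, using a stand-in class rather than the real `NativeCodeLoader` (the library name and method here are illustrative): check the cheap, pure-Java "is the native library loaded?" flag before touching any native symbol, so the call degrades to `false` instead of throwing.

```java
public class NativeGuard {
    // Stand-in for the native NativeCodeLoader#buildSupportsOpenssl;
    // invoking it without the library loaded throws UnsatisfiedLinkError.
    private static native boolean buildSupportsOpenssl();

    private static final boolean NATIVE_LOADED;
    static {
        boolean ok = false;
        try {
            System.loadLibrary("hadoop"); // absent on most dev machines
            ok = true;
        } catch (UnsatisfiedLinkError e) {
            ok = false;                   // pure-Java flag, never throws
        }
        NATIVE_LOADED = ok;
    }

    /** Safe wrapper: never touches the native symbol unless the lib loaded. */
    static boolean opensslSupported() {
        return NATIVE_LOADED && buildSupportsOpenssl();
    }

    public static void main(String[] args) {
        // On a machine without libhadoop this prints false instead of
        // crashing with UnsatisfiedLinkError.
        System.out.println(opensslSupported());
    }
}
```
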

> ITestS3ASSL+TestOpenSSLSocketFactory failing with 
> java.lang.UnsatisfiedLinkErrors
> -
>
> Key: HADOOP-16321
> URL: https://issues.apache.org/jira/browse/HADOOP-16321
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: macos
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>Priority: Major
>
> the new test of HADOOP-16050  {{ITestS3ASSL}} is failing with 
> {{java.lang.UnsatisfiedLinkError}}






[jira] [Resolved] (HADOOP-14621) S3A client raising ConnectionPoolTimeoutException in test

2019-05-20 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14621.
-
Resolution: Cannot Reproduce

> S3A client raising ConnectionPoolTimeoutException in test
> -
>
> Key: HADOOP-14621
> URL: https://issues.apache.org/jira/browse/HADOOP-14621
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.0.0-beta1
> Environment: Home network with 2+ other users on high bandwidth 
> activities
>Reporter: Steve Loughran
>Priority: Minor
>
> Parallel test with threads = 12 triggering connection pool timeout. 
> Hypothesis? Congested network triggering pool timeout.
> Fix? For tests, we could increase the pool size.
> For retry logic, this should be considered retriable, even on idempotent 
> calls (as it's a failure to acquire a connection).






[jira] [Resolved] (HADOOP-14124) S3AFileSystem silently deletes "fake" directories when writing a file.

2019-05-20 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14124.
-
Resolution: Won't Fix

> S3AFileSystem silently deletes "fake" directories when writing a file.
> --
>
> Key: HADOOP-14124
> URL: https://issues.apache.org/jira/browse/HADOOP-14124
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 2.6.0
>Reporter: Joel Baranick
>Priority: Minor
>  Labels: filesystem, s3
>
> I realize that you guys probably have a good reason for {{S3AFileSystem}} to 
> cleanup "fake" folders when a file is written to S3.  That said, that fact 
> that it silently does this feels like a separation of concerns issue.  It 
> also leads to weird behavior issues where calls to 
> {{AmazonS3Client.getObjectMetadata}} for folders work before calling 
> {{S3AFileSystem.create}} but not after.  Also, there seems to be no mention 
> in the javadoc that the {{deleteUnnecessaryFakeDirectories}} method is 
> automatically invoked. Lastly, it seems like the goal of {{FileSystem}} 
> should be to ensure that code built on top of it is portable to different 
> implementations.  This behavior is an example of a case where this can break 
> down.






[jira] [Resolved] (HADOOP-13402) S3A should allow renaming to a pre-existing destination directory to move the source path under that directory, similar to HDFS.

2019-05-20 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-13402.
-
Resolution: Won't Fix

> S3A should allow renaming to a pre-existing destination directory to move the 
> source path under that directory, similar to HDFS.
> 
>
> Key: HADOOP-13402
> URL: https://issues.apache.org/jira/browse/HADOOP-13402
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Rajesh Balamohan
>Priority: Minor
>
> In HDFS, a rename to a destination path that is a pre-existing directory is 
> interpreted as moving the source path relative to that pre-existing 
> directory.  In S3A, this operation currently fails (does nothing and returns 
> {{false}}), unless that destination directory is empty.  This issue proposes 
> to change S3A to allow this behavior, so that it more closely matches the 
> semantics of HDFS and other file systems.
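The HDFS semantics described above can be sketched with local-filesystem stand-ins (`java.nio` here is only an illustration; S3A's actual rename path is entirely different): when the destination is an existing directory, the source moves under it.

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class RenameIntoDir {
    public static void main(String[] args) throws Exception {
        Path root = Files.createTempDirectory("rename-demo");
        Path src = Files.createDirectory(root.resolve("src"));
        Files.createFile(src.resolve("part-0"));
        Path dest = Files.createDirectory(root.resolve("dest"));
        // HDFS-style semantics: the destination exists and is a directory,
        // so the source is moved UNDER it, yielding dest/src/part-0.
        Path target = dest.resolve(src.getFileName());
        Files.move(src, target);
        System.out.println(Files.exists(target.resolve("part-0")));
    }
}
```
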






[jira] [Commented] (HADOOP-16085) S3Guard: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-20 Thread Ben Roling (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843993#comment-16843993
 ] 

Ben Roling commented on HADOOP-16085:
-

Thanks Steve for all your help!

> S3Guard: use object version or etags to protect against inconsistent read 
> after replace/overwrite
> -
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Ben Roling
>Assignee: Ben Roling
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16085-003.patch, HADOOP-16085_002.patch, 
> HADOOP-16085_3.2.0_001.patch
>
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.
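The idea proposed above can be sketched with in-memory stand-ins (every name here is an illustrative toy, not the real S3Guard MetadataStore API): record the object's versionId in the metadata at write time, then pin reads to that exact version so an overwrite can never serve stale data.

```java
import java.util.HashMap;
import java.util.Map;

public class VersionedStoreSketch {
    // Toy "S3": path -> (versionId -> content).
    private final Map<String, Map<String, String>> objects = new HashMap<>();
    // Toy "S3Guard" metadata: path -> versionId recorded at write time.
    private final Map<String, String> guardMetadata = new HashMap<>();
    private int versionCounter = 0;

    void write(String path, String content) {
        String versionId = "v" + (++versionCounter);
        objects.computeIfAbsent(path, p -> new HashMap<>())
               .put(versionId, content);
        guardMetadata.put(path, versionId); // track the version on write
    }

    /** Read pinned to the version recorded in metadata, never a stale one. */
    String guardedRead(String path) {
        return objects.get(path).get(guardMetadata.get(path));
    }

    public static void main(String[] args) {
        VersionedStoreSketch s = new VersionedStoreSketch();
        s.write("/data/part-0", "old contents");
        s.write("/data/part-0", "new contents");
        // The guarded read returns the freshly written version.
        System.out.println(s.guardedRead("/data/part-0"));
    }
}
```
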






[jira] [Assigned] (HADOOP-16321) ITestS3ASSL+TestOpenSSLSocketFactory failing with java.lang.UnsatisfiedLinkErrors

2019-05-20 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar reassigned HADOOP-16321:
-

Assignee: Sahil Takiar

> ITestS3ASSL+TestOpenSSLSocketFactory failing with 
> java.lang.UnsatisfiedLinkErrors
> -
>
> Key: HADOOP-16321
> URL: https://issues.apache.org/jira/browse/HADOOP-16321
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.3.0
> Environment: macos
>Reporter: Steve Loughran
>Assignee: Sahil Takiar
>Priority: Major
>
> the new test of HADOOP-16050  {{ITestS3ASSL}} is failing with 
> {{java.lang.UnsatisfiedLinkError}}






[GitHub] [hadoop] hadoop-yetus commented on issue #833: HDDS-1502. Add metrics for Ozone Ratis performance.

2019-05-20 Thread GitBox
hadoop-yetus commented on issue #833: HDDS-1502. Add metrics for Ozone Ratis 
performance.
URL: https://github.com/apache/hadoop/pull/833#issuecomment-493972530
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 923 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 457 | trunk passed |
   | +1 | compile | 217 | trunk passed |
   | +1 | checkstyle | 54 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 867 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 145 | trunk passed |
   | 0 | spotbugs | 333 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 580 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 502 | the patch passed |
   | +1 | compile | 259 | the patch passed |
   | +1 | javac | 259 | the patch passed |
   | -0 | checkstyle | 34 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 722 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 595 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 219 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1787 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 7748 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.container.common.transport.server.ratis.TestCSMMetrics |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.om.TestOmAcls |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.om.TestOMDbCheckpointServlet |
   |   | hadoop.ozone.ozShell.TestOzoneShell |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-833/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/833 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2c6ce6e248b1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0d1d7c8 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-833/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-833/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-833/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-833/1/testReport/ |
   | Max. process+thread count | 3780 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-833/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16203) ITestS3AContractGetFileStatusV1List may have consistency issues

2019-05-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843858#comment-16843858
 ] 

Steve Loughran commented on HADOOP-16203:
-

More detailed listing. The output of a previous test case in the same file is 
surfacing.
{code}
[ERROR] Failures: 
[ERROR]   
ITestS3AContractGetFileStatusV1List>AbstractContractGetFileStatusTest.testListLocatedStatusEmptyDirectory:132->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 listLocatedStatus(test dir): directory count in 2 directories and 0 files 
expected:<1> but was:<2>
[ERROR]   
ITestS3AContractGetFileStatusV1List>AbstractContractGetFileStatusTest.testListLocatedStatusFiltering:490->AbstractContractGetFileStatusTest.verifyListStatus:525->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 length of listStatus(s3a://hwdev-steve-ireland-new/fork-0009/test, 
org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest$AllPathsFilter@48c57156
 ) [S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0009/test/file-2.txt; 
isDirectory=false; length=0; replication=1; blocksize=33554432; 
modification_time=1558300045000; access_time=0; owner=stevel; group=stevel; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false} isEmptyDirectory=FALSE eTag=null versionId=null, 
S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0009/test/file-1.txt; 
isDirectory=false; length=0; replication=1; blocksize=33554432; 
modification_time=1558300045000; access_time=0; owner=stevel; group=stevel; 
permission=rw-rw-rw-; isSymlink=false; hasAcl=false; isEncrypted=true; 
isErasureCoded=false} isEmptyDirectory=FALSE eTag=null versionId=null, 
S3AFileStatus{path=s3a://hwdev-steve-ireland-new/fork-0009/test/ITestS3AContractDistCp;
 isDirectory=true; modification_time=0; access_time=0; owner=stevel; 
group=stevel; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; 
isEncrypted=true; isErasureCoded=false} isEmptyDirectory=UNKNOWN eTag=null 
versionId=null] expected:<2> but was:<3>
[ERROR] Errors: 
[ERROR]   ITestS3ASSL.testOpenSSL:43 » UnsatisfiedLink 
org.apache.hadoop.util.NativeCode...
[ERROR]   
ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testBucketInfoUnguarded:342
 » FileNotFound
[INFO] 

{code}

The problem is that the test base is being listed, so list inconsistencies can 
surface. The base contract test suite needs to create an empty dir for listing.

> ITestS3AContractGetFileStatusV1List may have consistency issues
> ---
>
> Key: HADOOP-16203
> URL: https://issues.apache.org/jira/browse/HADOOP-16203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Seeing a failure in the listing tests which looks like it could suffer 
> from some consistency/concurrency issues: the path used is chosen from the 
> method name, but with two subclasses of the 
> {{AbstractContractGetFileStatusTest}} suite, the S3A tests could be 
> interfering.






[jira] [Updated] (HADOOP-16203) ITestS3AContractGetFileStatusV1List may have consistency issues

2019-05-20 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16203:

Component/s: test

> ITestS3AContractGetFileStatusV1List may have consistency issues
> ---
>
> Key: HADOOP-16203
> URL: https://issues.apache.org/jira/browse/HADOOP-16203
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> Seeing a failure in the listing tests which looks like it could suffer 
> from some consistency/concurrency issues: the path used is chosen from the 
> method name, but with two subclasses of the 
> {{AbstractContractGetFileStatusTest}} suite, the S3A tests could be 
> interfering.






[jira] [Commented] (HADOOP-13656) fs -expunge to take a filesystem

2019-05-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843855#comment-16843855
 ] 

Steve Loughran commented on HADOOP-13656:
-


Tests LGTM; the HDFS test failures are clearly unrelated.

Can you look at the checkstyle warnings, other than those about nested braces? 
Those are in existing code and can be ignored.

> fs -expunge to take a filesystem
> 
>
> Key: HADOOP-13656
> URL: https://issues.apache.org/jira/browse/HADOOP-13656
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-13656.001.patch, HADOOP-13656.002.patch, 
> HADOOP-13656.003.patch, HADOOP-13656.004.patch, HADOOP-13656.005.patch, 
> HADOOP-13656.006.patch
>
>
> you can't pass in a filesystem or object store to {{fs -expunge}}; you have 
> to change the default fs
> {code}
> hadoop fs -expunge -D fs.defaultFS=s3a://bucket/
> {code}
> If the command took an optional filesystem argument, it'd be better at 
> cleaning up object stores. Given that even deleted object store data runs up 
> bills, this could be appreciated.






[jira] [Commented] (HADOOP-16320) Workaround bug with commons-configuration to be able to emit Ganglia metrics to multiple sink servers

2019-05-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843853#comment-16843853
 ] 

Steve Loughran commented on HADOOP-16320:
-

Thomas: you can now submit github PRs, which is a lot easier for incremental 
development and review

> Workaround bug with commons-configuration to be able to emit Ganglia metrics 
> to multiple sink servers
> -
>
> Key: HADOOP-16320
> URL: https://issues.apache.org/jira/browse/HADOOP-16320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.9.2, 2.9.3
>Reporter: Thomas Poepping
>Assignee: Thomas Poepping
>Priority: Minor
> Attachments: HADOOP-16320.patch
>
>
> AbstractGangliaSink is used by the hadoop-metrics2 package to emit metrics to 
> Ganglia. Currently, this class uses the apache commons-configuration package 
> to read from the hadoop-metrics2.properties file. commons-configuration is 
> outdated, and has a bug where the .getString function drops everything after 
> the first comma. This change uses .getList instead, which will work for one 
> or many Ganglia sink servers.
>  
> This is fixed in trunk by upgrading to commons-configuration2, which doesn't 
> have the bug anymore.
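A hedged stand-in for the behavior difference described above (these helpers only mimic what the issue says about commons-configuration 1.x's getString()/getList(); they are not the real API): getString() keeps only the text before the first comma, silently dropping the second sink server, while getList() preserves every entry.

```java
import java.util.ArrayList;
import java.util.List;

public class GangliaServersSketch {
    // Mimics the lossy getString() behavior: first comma token only.
    static String getStringLike(String raw) {
        return raw.split(",")[0].trim();
    }

    // Mimics getList(): every comma-separated server is preserved.
    static List<String> getListLike(String raw) {
        List<String> out = new ArrayList<>();
        for (String s : raw.split(",")) {
            out.add(s.trim());
        }
        return out;
    }

    public static void main(String[] args) {
        String servers = "ganglia1:8649,ganglia2:8649";
        System.out.println(getStringLike(servers));      // drops the 2nd host
        System.out.println(getListLike(servers).size()); // keeps both hosts
    }
}
```
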






[jira] [Commented] (HADOOP-16317) ABFS: improve random read performance

2019-05-20 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843852#comment-16843852
 ] 

Steve Loughran commented on HADOOP-16317:
-

Be happy to talk about this some time, with the lessons of the S3A work. FWIW, 
I think we need to revisit some of those assumptions of the S3A connector, and 
do so based on more recent trace data from Hive/Spark/Impala queries of both 
ORC and Parquet. 

* HADOOP-13203 was driven by ORC+ Hive data from 2016; Hive optimisations may 
have obsoleted those
* Parquet seems to use different read APIs and I don't have trace data there
* [~stakiar] is looking at Impala perf against stores; again there's a new
 * HADOOP-15229 adds an openFile() call where you can pass down config options. 
S3A takes that fs.s3a.experimental.fadvise policy -- if you were to add 
something similar to ABFS then we could declare a standard option for 
cross-store use. And you can provide an async HEAD probe for faster opening.
* HADOOP-11867 looks at a vector read API ; there's an ABFS dependent. If we 
can move ORC and Parquet to that API, then it will line you up for the ability 
to make decisions in your connector for how best to do the reads (reorder, 
merge, submit as parallel GETs, use HTTP/2, etc).  

I'm not doing any work on HADOOP-11867, and I don't know anyone else who is, 
though I know people who would like it. If you were willing to go that way - 
work up the stack rather than just in the connector, dealing with the minimal 
sequential information coming from the apps today - you'd have an opportunity 
to do profound things. 



> ABFS: improve random read performance
> -
>
> Key: HADOOP-16317
> URL: https://issues.apache.org/jira/browse/HADOOP-16317
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Priority: Major
>
> Improving random read performance is an interesting topic. ABFS doesn't 
> perform well when reading columnar-format files, as the process involves 
> many seek operations which render readAhead useless; and if readAhead is 
> used unwisely, it leads to unnecessary data requests.
> Hence creating this Jira as a reminder to track the investigation and 
> progress of the work.






[GitHub] [hadoop] bshashikant opened a new pull request #833: HDDS-1502. Add metrics for Ozone Ratis performance.

2019-05-20 Thread GitBox
bshashikant opened a new pull request #833: HDDS-1502. Add metrics for Ozone 
Ratis performance.
URL: https://github.com/apache/hadoop/pull/833
 
 
   





[GitHub] [hadoop] mukul1987 commented on issue #825: HDDS-1449. JVM Exit in datanode while committing a key. Contributed by Mukul Kumar Singh.

2019-05-20 Thread GitBox
mukul1987 commented on issue #825: HDDS-1449. JVM Exit in datanode while 
committing a key. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/825#issuecomment-493926256
 
 
   Thanks for the review @bshashikant.
   I have changed the name of the variable to checkNoReferences in place of 
force. As there are no users of the force flag, I feel we can address that 
later.





[GitHub] [hadoop] bshashikant commented on a change in pull request #826: HDDS-1517. AllocateBlock call fails with ContainerNotFoundException.

2019-05-20 Thread GitBox
bshashikant commented on a change in pull request #826: HDDS-1517. 
AllocateBlock call fails with ContainerNotFoundException.
URL: https://github.com/apache/hadoop/pull/826#discussion_r285500751
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
 ##
 @@ -386,18 +386,17 @@ public ContainerInfo getMatchingContainer(final long 
sizeRequired,
 
  public ContainerInfo getMatchingContainer(final long sizeRequired,
  String owner, Pipeline pipeline, List<ContainerID> excludedContainers) {
+NavigableSet<ContainerID> containerIDs;
 try {
-  //TODO: #CLUTIL See if lock is required here
-  NavigableSet<ContainerID> containerIDs =
-  pipelineManager.getContainersInPipeline(pipeline.getId());
+  synchronized (pipeline) {
+//TODO: #CLUTIL See if lock is required here
+containerIDs =
+pipelineManager.getContainersInPipeline(pipeline.getId());
 
-  containerIDs = getContainersForOwner(containerIDs, owner);
-  if (containerIDs.size() < numContainerPerOwnerInPipeline) {
-synchronized (pipeline) {
+containerIDs = getContainersForOwner(containerIDs, owner);
+if (containerIDs.size() < numContainerPerOwnerInPipeline) {
   // TODO: #CLUTIL Maybe we can add selection logic inside synchronized
   // as well
-  containerIDs = getContainersForOwner(
-  pipelineManager.getContainersInPipeline(pipeline.getId()), 
owner);
   if (containerIDs.size() < numContainerPerOwnerInPipeline) {
 ContainerInfo containerInfo =
 containerStateManager.allocateContainer(pipelineManager, owner,
 
 Review comment:
   The SCM CLI will exercise this code path, since it has an option to 
create a container. I think we can disable the createContainer call from SCMCLI, 
as it is not used. What do you think?
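The diff above moves both the container lookup and the size check under `synchronized (pipeline)`, so the check and the allocation happen atomically per pipeline. A minimal, self-contained sketch of that check-then-allocate-under-lock pattern follows; the class and method names (`ContainerSelector`, `getMatchingContainer`) are hypothetical stand-ins for the real `SCMContainerManager`, which delegates to `pipelineManager` and `containerStateManager`:

```java
import java.util.NavigableSet;
import java.util.TreeSet;

// Hypothetical sketch: allocate a container only if the pipeline has fewer
// than the configured minimum, with the check and allocation in one lock.
public class ContainerSelector {
  private final NavigableSet<Long> containerIds = new TreeSet<>();
  private final int minContainersPerOwner;
  private final Object pipelineLock = new Object();  // stands in for the Pipeline object
  private long nextId = 1;

  ContainerSelector(int minContainersPerOwner) {
    this.minContainersPerOwner = minContainersPerOwner;
  }

  /** Returns a matching container id, allocating one under the lock
   *  if the pipeline has fewer than the configured minimum. */
  public long getMatchingContainer() {
    synchronized (pipelineLock) {
      // Both the lookup and the size check happen inside the lock, so a
      // concurrent caller cannot allocate a duplicate between check and add.
      if (containerIds.size() < minContainersPerOwner) {
        long id = nextId++;
        containerIds.add(id);
        return id;
      }
      return containerIds.first();
    }
  }

  public int size() {
    synchronized (pipelineLock) { return containerIds.size(); }
  }

  public static void main(String[] args) {
    ContainerSelector selector = new ContainerSelector(1);
    long a = selector.getMatchingContainer(); // allocates the first container
    long b = selector.getMatchingContainer(); // reuses it; no new allocation
    System.out.println(a == b);               // true
    System.out.println(selector.size());      // 1
  }
}
```

With the old code, the first lookup ran outside the lock, so two threads could both observe an undersized pipeline and allocate twice; holding the lock across both steps avoids that window.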


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 


[GitHub] [hadoop] bshashikant commented on issue #825: HDDS-1449. JVM Exit in datanode while committing a key. Contributed by Mukul Kumar Singh.

2019-05-20 Thread GitBox
bshashikant commented on issue #825: HDDS-1449. JVM Exit in datanode while 
committing a key. Contributed by Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/825#issuecomment-493900456
 
 
   Thanks @mukul1987 for working on this. The patch overall looks good to me. 
Some comments :
   1) When want to force evict the container cache, it should ideally not check 
for the reference count to be 0.
   2) We should have two methods, one to actually evict the cache forcefully 
which will not validate the the reference count to be 0 and one which actually 
validates  the reference count to be 0 before removing.
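The two eviction paths suggested above can be sketched as follows. This is a minimal illustration, not the actual Ozone `ContainerCache` API; the names `RefCountedCache`, `evictIfUnreferenced`, and `forceEvict` are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a reference-counted cache with both a checked
// eviction (refuses while references are held) and a forced eviction.
public class RefCountedCache {
  private static class Entry {
    final String value;
    int refCount;
    Entry(String value) { this.value = value; }
  }

  private final Map<String, Entry> entries = new HashMap<>();

  public synchronized void put(String key, String value) {
    entries.put(key, new Entry(value));
  }

  public synchronized String acquire(String key) {
    Entry e = entries.get(key);
    if (e == null) return null;
    e.refCount++;
    return e.value;
  }

  public synchronized void release(String key) {
    Entry e = entries.get(key);
    if (e != null && e.refCount > 0) e.refCount--;
  }

  /** Evicts only when no caller still holds a reference. */
  public synchronized boolean evictIfUnreferenced(String key) {
    Entry e = entries.get(key);
    if (e == null || e.refCount > 0) return false;
    entries.remove(key);
    return true;
  }

  /** Forced eviction: removes the entry regardless of reference count. */
  public synchronized void forceEvict(String key) {
    entries.remove(key);
  }

  public synchronized boolean contains(String key) {
    return entries.containsKey(key);
  }

  public static void main(String[] args) {
    RefCountedCache cache = new RefCountedCache();
    cache.put("container-1", "db-handle");
    cache.acquire("container-1");                                 // refCount = 1
    System.out.println(cache.evictIfUnreferenced("container-1")); // false: still referenced
    cache.forceEvict("container-1");                              // removed anyway
    System.out.println(cache.contains("container-1"));            // false
  }
}
```

Separating the two methods keeps the common path safe (no eviction under active references) while still letting shutdown or error handling reclaim entries unconditionally.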





[jira] [Updated] (HADOOP-14468) S3Guard: make short-circuit getFileStatus() configurable

2019-05-20 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14468:

Release Note:   (was: Resolved this with HADOOP-15999 : fix for OOB 
operations.)

> S3Guard: make short-circuit getFileStatus() configurable
> 
>
> Key: HADOOP-14468
> URL: https://issues.apache.org/jira/browse/HADOOP-14468
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
>
> Currently, when S3Guard is enabled, getFileStatus() will skip S3 if it gets a 
> result from the MetadataStore (e.g. dynamodb) first.
> I would like to add a new parameter 
> {{fs.s3a.metadatastore.getfilestatus.authoritative}} which, when true, keeps 
> the current behavior.  When false, S3AFileSystem will check both S3 and the 
> MetadataStore.
> I'm not sure yet if we want to have this behavior the same for all callers of 
> getFileStatus(), or if we only want to check both S3 and MetadataStore for 
> some internal callers such as open().
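The proposed flag's effect on lookup order can be sketched as below. This is a hypothetical simplification, not the S3AFileSystem implementation; the `Store` interface and the stub lambdas stand in for the MetadataStore (e.g. DynamoDB) and the S3 HEAD request:

```java
// Hypothetical sketch: when the authoritative flag is set, a MetadataStore
// hit short-circuits the S3 lookup (current behavior); when unset, S3 is
// consulted as well so out-of-band changes are observed.
public class AuthoritativeLookup {
  static final String AUTHORITATIVE_KEY =
      "fs.s3a.metadatastore.getfilestatus.authoritative";

  interface Store { String getFileStatus(String path); }  // null = not found

  static String getFileStatus(String path, boolean authoritative,
                              Store metadataStore, Store s3) {
    String fromMeta = metadataStore.getFileStatus(path);
    if (fromMeta != null && authoritative) {
      return fromMeta;          // current behavior: short-circuit on a hit
    }
    String fromS3 = s3.getFileStatus(path);
    // When not authoritative, prefer the S3 answer over a possibly stale
    // MetadataStore entry.
    return fromS3 != null ? fromS3 : fromMeta;
  }

  public static void main(String[] args) {
    Store meta = path -> "stale-entry";   // MetadataStore has an old record
    Store s3 = path -> "fresh-entry";     // S3 has the current object
    System.out.println(getFileStatus("/k", true, meta, s3));  // stale-entry
    System.out.println(getFileStatus("/k", false, meta, s3)); // fresh-entry
  }
}
```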



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14468) S3Guard: make short-circuit getFileStatus() configurable

2019-05-20 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16843753#comment-16843753
 ] 

Gabor Bota commented on HADOOP-14468:
-

Resolved this with HADOOP-15999 : fix for OOB operations.

> S3Guard: make short-circuit getFileStatus() configurable
> 
>
> Key: HADOOP-14468
> URL: https://issues.apache.org/jira/browse/HADOOP-14468
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
>






[jira] [Resolved] (HADOOP-14468) S3Guard: make short-circuit getFileStatus() configurable

2019-05-20 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-14468.
-
  Resolution: Fixed
Release Note: Resolved this with HADOOP-15999 : fix for OOB operations.

> S3Guard: make short-circuit getFileStatus() configurable
> 
>
> Key: HADOOP-14468
> URL: https://issues.apache.org/jira/browse/HADOOP-14468
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
>


