[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #86: HDDS-2329 Destroy pipelines on any decommission or maintenance nodes

2019-11-02 Thread GitBox
anuengineer commented on a change in pull request #86: HDDS-2329 Destroy 
pipelines on any decommission or maintenance nodes
URL: https://github.com/apache/hadoop-ozone/pull/86#discussion_r341825935
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitor.java
 ##
 @@ -0,0 +1,269 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.node;
+
+import com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.scm.events.SCMEvents;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.common.statemachine.InvalidStateTransitionException;
+import org.apache.hadoop.ozone.common.statemachine.StateMachine;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Queue;
+import java.util.ArrayDeque;
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * Monitor thread which watches for nodes to be decommissioned, recommissioned
+ * or placed into maintenance. Newly added nodes are queued in pendingNodes
+ * and recommissioned nodes are queued in cancelledNodes. On each monitor
+ * 'tick', the cancelled nodes are processed and removed from the monitor.
+ * Then any pending nodes are added to the trackedNodes set, where they stay
+ * until decommission or maintenance has ended.
+ *
+ * Once a node is placed into tracked nodes, it goes through a workflow where
+ * the following happens:
+ *
+ * 1. First an event is fired to close any pipelines on the node, which will
+ *also close any contaners.
+ * 2. Next the containers on the node are obtained and checked to see if new
+ *replicas are needed. If so, the new replicas are scheduled.
+ * 3. After scheduling replication, the node remains pending until replication
+ *has completed.
+ * 4. At this stage the node will complete decommission or enter maintenance.
+ * 5. Maintenance nodes will remain tracked by this monitor until maintenance
+ *is manually ended, or the maintenance window expires.
+ */
+public class DatanodeAdminMonitor implements DatanodeAdminMonitorInterface {
+
+  private OzoneConfiguration conf;
+  private EventPublisher eventQueue;
+  private NodeManager nodeManager;
+  private PipelineManager pipelineManager;
+  private Queue pendingNodes = new ArrayDeque();
+  private Queue cancelledNodes = new ArrayDeque();
+  private Set trackedNodes = new HashSet<>();
+  private StateMachine<States, Transitions> workflowSM;
+
+  /**
+   * States that a node must pass through when being decommissioned or placed
+   * into maintenance.
+   */
+  public enum States {
+    CLOSE_PIPELINES, GET_CONTAINERS, REPLICATE_CONTAINERS,
 
 Review comment:
   Also, we might want to add a detailed ASCII-based state machine diagram
for future readers.
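
   For illustration, one way such a diagram could look, inferred from the
workflow javadoc quoted above (the decommission/maintenance branch follows
steps 4-5 and is a sketch, not necessarily the PR's exact transition table):

   CLOSE_PIPELINES --> GET_CONTAINERS --> REPLICATE_CONTAINERS
                                                   |
                        (decommission)             +--> COMPLETE
                                                   |
                        (maintenance)              +--> AWAIT_MAINTENANCE_END --> COMPLETE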





[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #86: HDDS-2329 Destroy pipelines on any decommission or maintenance nodes

2019-11-02 Thread GitBox
anuengineer commented on a change in pull request #86: HDDS-2329 Destroy 
pipelines on any decommission or maintenance nodes
URL: https://github.com/apache/hadoop-ozone/pull/86#discussion_r341825865
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitor.java
 ##
 @@ -0,0 +1,269 @@
+ * 1. First an event is fired to close any pipelines on the node, which will
+ *also close any contaners.
 
 Review comment:
   typo: contaners -> containers





[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #86: HDDS-2329 Destroy pipelines on any decommission or maintenance nodes

2019-11-02 Thread GitBox
anuengineer commented on a change in pull request #86: HDDS-2329 Destroy 
pipelines on any decommission or maintenance nodes
URL: https://github.com/apache/hadoop-ozone/pull/86#discussion_r341825916
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitor.java
 ##
 @@ -0,0 +1,269 @@
+  /**
+   * States that a node must pass through when being decommissioned or placed
+   * into maintenance.
+   */
+  public enum States {
+    CLOSE_PIPELINES, GET_CONTAINERS, REPLICATE_CONTAINERS,
 
 Review comment:
   Do you want to add an integer to this enum, so you can annotate which
value is first, second, etc.? At this point the ordering is implicit, I
suppose.
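
   For illustration, a minimal sketch of an explicitly numbered version
(hypothetical; the PR may well choose a different approach):

   public enum States {
     CLOSE_PIPELINES(1), GET_CONTAINERS(2), REPLICATE_CONTAINERS(3),
     AWAIT_MAINTENANCE_END(4), COMPLETE(5);

     private final int sequenceNumber;

     States(int sequenceNumber) {
       this.sequenceNumber = sequenceNumber;
     }

     // Explicit ordering of the workflow stages; unlike ordinal(), this
     // stays stable if the constants are ever reordered.
     public int getSequenceNumber() {
       return sequenceNumber;
     }
   }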





[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #86: HDDS-2329 Destroy pipelines on any decommission or maintenance nodes

2019-11-02 Thread GitBox
anuengineer commented on a change in pull request #86: HDDS-2329 Destroy 
pipelines on any decommission or maintenance nodes
URL: https://github.com/apache/hadoop-ozone/pull/86#discussion_r341826121
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DatanodeAdminMonitor.java
 ##
 @@ -0,0 +1,269 @@
+  public enum States {
+    CLOSE_PIPELINES, GET_CONTAINERS, REPLICATE_CONTAINERS,
+    AWAIT_MAINTENANCE_END, COMPLETE
+  }
+
+  /**
+   * Transition events that occur to move a node from one state to the next.
+   */
+  public enum Transitions {
+    COMPLETE_DECOM_STAGE, COMPLETE_MAINT_STAGE, UNEXPECTED_NODE_STATE
+  }
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(DatanodeAdminMonitor.class);
+
+  public DatanodeAdminMonitor(OzoneConfiguration config) {
+    conf = config;
+    initializeStateMachine();
+  }
+
+  @Override
+  public void setConf(OzoneConfiguration config) {
+    conf = config;
+  }
+
+  @Override
+  public void setEventQueue(EventPublisher eventQueue) {
+    this.eventQueue = eventQueue;
+  }
+
+  @Override
+  public void setNodeManager(NodeManager nm) {
+    nodeManager = nm;
+  }
+
+  @Override
+  public void setPipelineManager(PipelineManager pm) {
+    pipelineManager = pm;
+  }
+
+  /**
+   * Add a node to the decommission or maintenance workflow. The node will be
+   * queued and added to the workflow after a defined interval.
+   *
+   * @param dn The datanode to move into an admin state
+   * @param endInHours For nodes going into maintenance, the number of hours
+   *                   from now for maintenance to automatically end. Ignored
+   *                   for decommissioning nodes.
+   */
+
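
   The constructor above calls initializeStateMachine(). As a rough sketch
only (the transition table is inferred from the workflow javadoc, and the
constructor/addTransition signatures are assumed from the imported
org.apache.hadoop.ozone.common.statemachine.StateMachine), the wiring could
look like:

   private void initializeStateMachine() {
     Set<States> finalStates = new HashSet<>();
     finalStates.add(States.COMPLETE);
     workflowSM = new StateMachine<>(States.CLOSE_PIPELINES, finalStates);
     // Decommission path: close pipelines, find containers, replicate.
     workflowSM.addTransition(States.CLOSE_PIPELINES,
         States.GET_CONTAINERS, Transitions.COMPLETE_DECOM_STAGE);
     workflowSM.addTransition(States.GET_CONTAINERS,
         States.REPLICATE_CONTAINERS, Transitions.COMPLETE_DECOM_STAGE);
     workflowSM.addTransition(States.REPLICATE_CONTAINERS,
         States.COMPLETE, Transitions.COMPLETE_DECOM_STAGE);
     // Maintenance path additionally waits for the window to end.
     workflowSM.addTransition(States.REPLICATE_CONTAINERS,
         States.AWAIT_MAINTENANCE_END, Transitions.COMPLETE_MAINT_STAGE);
     workflowSM.addTransition(States.AWAIT_MAINTENANCE_END,
         States.COMPLETE, Transitions.COMPLETE_MAINT_STAGE);
   }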

[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #110: HDDS-2321. Ozone Block Token verify should not apply to all datanode …

2019-11-02 Thread GitBox
anuengineer commented on a change in pull request #110: HDDS-2321. Ozone Block 
Token verify should not apply to all datanode …
URL: https://github.com/apache/hadoop-ozone/pull/110#discussion_r341825738
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/security/token/BlockTokenVerifier.java
 ##
 @@ -55,68 +57,68 @@ private boolean isExpired(long expiryDate) {
   }
 
   @Override
-  public UserGroupInformation verify(String user, String tokenStr)
-  throws SCMSecurityException {
-if (conf.isBlockTokenEnabled()) {
-  // TODO: add audit logs.
-
-  if (Strings.isNullOrEmpty(tokenStr)) {
-throw new BlockTokenException("Fail to find any token (empty or " +
-"null.)");
-  }
-  final Token<OzoneBlockTokenIdentifier> token = new Token<>();
-  OzoneBlockTokenIdentifier tokenId = new OzoneBlockTokenIdentifier();
-  try {
-token.decodeFromUrlString(tokenStr);
-if (LOGGER.isDebugEnabled()) {
-  LOGGER.debug("Verifying token:{} for user:{} ", token, user);
-}
-ByteArrayInputStream buf = new ByteArrayInputStream(
-token.getIdentifier());
-DataInputStream in = new DataInputStream(buf);
-tokenId.readFields(in);
-
-  } catch (IOException ex) {
-throw new BlockTokenException("Failed to decode token : " + tokenStr);
-  }
+  public void verify(String user, String tokenStr,
+  ContainerProtos.Type cmd, String id) throws SCMSecurityException {
+if (!conf.isBlockTokenEnabled() || !HddsUtils.requireOmBlockToken(cmd)) {
+  return;
+}
+
+// TODO: add audit logs.
+if (Strings.isNullOrEmpty(tokenStr)) {
+  throw new BlockTokenException("Fail to find any token (empty or " +
+  "null.)");
+}
 
-  if (caClient == null) {
-throw new SCMSecurityException("Certificate client not available " +
-"to validate token");
+final Token<OzoneBlockTokenIdentifier> token = new Token<>();
+OzoneBlockTokenIdentifier tokenId = new OzoneBlockTokenIdentifier();
+try {
+  token.decodeFromUrlString(tokenStr);
+  if (LOGGER.isDebugEnabled()) {
+LOGGER.debug("Verifying token:{} for user:{} ", token, user);
   }
+  ByteArrayInputStream buf = new ByteArrayInputStream(
+  token.getIdentifier());
+  DataInputStream in = new DataInputStream(buf);
+  tokenId.readFields(in);
 
-  X509Certificate singerCert;
-  singerCert = caClient.getCertificate(tokenId.getOmCertSerialId());
+} catch (IOException ex) {
+  throw new BlockTokenException("Failed to decode token : " + tokenStr);
+}
 
-  if (singerCert == null) {
-throw new BlockTokenException("Can't find signer certificate " +
-"(OmCertSerialId: " + tokenId.getOmCertSerialId() +
-") of the block token for user: " + tokenId.getUser());
-  }
-  boolean validToken = caClient.verifySignature(tokenId.getBytes(),
-  token.getPassword(), singerCert);
-  if (!validToken) {
-throw new BlockTokenException("Invalid block token for user: " +
-tokenId.getUser());
-  }
+if (caClient == null) {
+  throw new SCMSecurityException("Certificate client not available " +
+  "to validate token");
+}
 
-  // check expiration
-  if (isExpired(tokenId.getExpiryDate())) {
-UserGroupInformation tokenUser = tokenId.getUser();
-tokenUser.setAuthenticationMethod(
-UserGroupInformation.AuthenticationMethod.TOKEN);
-throw new BlockTokenException("Expired block token for user: " +
-tokenUser);
-  }
-  // defer access mode, bcsid and maxLength check to container dispatcher
-  UserGroupInformation ugi = tokenId.getUser();
-  ugi.addToken(token);
-  ugi.setAuthenticationMethod(UserGroupInformation
-  .AuthenticationMethod.TOKEN);
-  return ugi;
-} else {
-  return UserGroupInformation.createRemoteUser(user);
+UserGroupInformation tokenUser = tokenId.getUser();
+X509Certificate singerCert;
 
 Review comment:
   typo: singerCert  -> signerCert?





[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #110: HDDS-2321. Ozone Block Token verify should not apply to all datanode …

2019-11-02 Thread GitBox
anuengineer commented on a change in pull request #110: HDDS-2321. Ozone Block 
Token verify should not apply to all datanode …
URL: https://github.com/apache/hadoop-ozone/pull/110#discussion_r341825562
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -390,6 +391,52 @@ public static boolean isReadOnly(
 }
   }
 
+  /**
+   * Not every datanode container command has an embedded ozone block token.
+   * Block tokens are issued by the Ozone Manager and returned to the Ozone
+   * client to read/write data on datanodes via input/output streams.
+   * Ozone datanodes use this helper to decide which commands require a
+   * block token.
+   * @param cmdType the container command type
+   * @return true if the command's block token should be checked when
+   * security is enabled; false if block tokens do not apply to the command.
+   */
+  public static boolean requireOmBlockToken(
+      ContainerProtos.Type cmdType) {
+    switch (cmdType) {
+    case ReadChunk:
+    case GetBlock:
+    case WriteChunk:
+    case PutBlock:
+      return true;
+    default:
+      return false;
+    }
+  }
+
+  /**
+   * Return the block ID of container commands that are related to blocks.
+   * @param msg container command
+   * @return block ID.
+   */
+  public static BlockID getBlockID(
+      ContainerProtos.ContainerCommandRequestProto msg) {
+    switch (msg.getCmdType()) {
+    case ReadChunk:
 
 Review comment:
   While you are absolutely right that if we set the command type, the
corresponding object must be valid, in the proto file these fields are all
optional. So check if (msg.hasReadChunk()) before the block ID call.
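
   For illustration, the guarded pattern might look like this (a sketch;
the hasXxx()/getXxx() accessors and BlockID.getFromProtobuf are assumed
from the usual generated protobuf code and Ozone helpers):

   public static BlockID getBlockID(
       ContainerProtos.ContainerCommandRequestProto msg) {
     switch (msg.getCmdType()) {
     case ReadChunk:
       // readChunk is an optional field in the .proto definition, so
       // guard against a request whose cmdType and payload disagree.
       return msg.hasReadChunk()
           ? BlockID.getFromProtobuf(msg.getReadChunk().getBlockID())
           : null;
     case GetBlock:
       return msg.hasGetBlock()
           ? BlockID.getFromProtobuf(msg.getGetBlock().getBlockID())
           : null;
     default:
       return null;
     }
   }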





[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #110: HDDS-2321. Ozone Block Token verify should not apply to all datanode …

2019-11-02 Thread GitBox
anuengineer commented on a change in pull request #110: HDDS-2321. Ozone Block 
Token verify should not apply to all datanode …
URL: https://github.com/apache/hadoop-ozone/pull/110#discussion_r341825496
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsUtils.java
 ##
 @@ -390,6 +391,52 @@ public static boolean isReadOnly(
 }
   }
 
+  public static boolean requireOmBlockToken(
+      ContainerProtos.Type cmdType) {
+    switch (cmdType) {
+    case ReadChunk:
+    case GetBlock:
+    case WriteChunk:
+    case PutBlock:
 
 Review comment:
   PutSmallFile and GetSmallFile: even though they are not used now, we
should cover them from a security perspective?
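
   A sketch of the switch with those cases added (assuming the small-file
commands should be token-checked exactly like the block/chunk commands):

   public static boolean requireOmBlockToken(
       ContainerProtos.Type cmdType) {
     switch (cmdType) {
     case ReadChunk:
     case GetBlock:
     case WriteChunk:
     case PutBlock:
     // Cover the small-file path too, even though it is unused today, so
     // it cannot become an unauthenticated data path later.
     case PutSmallFile:
     case GetSmallFile:
       return true;
     default:
       return false;
     }
   }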





[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on issue #94: HDDS-2255. Improve Acl Handler Messages

2019-11-02 Thread GitBox
bharatviswa504 edited a comment on issue #94: HDDS-2255. Improve Acl Handler 
Messages
URL: https://github.com/apache/hadoop-ozone/pull/94#issuecomment-549061100
 
 
   Below are the findbugs issues Jenkins reported that need to be fixed:
   M D DLS: Dead store to result in
org.apache.hadoop.ozone.web.ozShell.bucket.SetAclBucketHandler.call()
At SetAclBucketHandler.java:[line 91]
   M D DLS: Dead store to result in
org.apache.hadoop.ozone.web.ozShell.keys.SetAclKeyHandler.call()
At SetAclKeyHandler.java:[line 93]

   The dead stores are the unused variable result in SetAclBucketHandler
and SetAclKeyHandler.
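
   A dead store means the assigned value is never read afterwards.
Schematically (hypothetical shape; the actual handlers may differ), the
fix is to either drop the variable or use it:

   // Flagged: 'result' is assigned but never read.
   boolean result = client.setAcl(ozoneObj, acls);

   // Fix, option 1: drop the variable.
   client.setAcl(ozoneObj, acls);

   // Fix, option 2: use the value in the message shown to the user.
   boolean updated = client.setAcl(ozoneObj, acls);
   System.out.println(updated
       ? "ACL set successfully."
       : "ACL was already present; nothing changed.");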





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #94: HDDS-2255. Improve Acl Handler Messages

2019-11-02 Thread GitBox
bharatviswa504 commented on issue #94: HDDS-2255. Improve Acl Handler Messages
URL: https://github.com/apache/hadoop-ozone/pull/94#issuecomment-549061100
 
 
   Below are the findbugs issues Jenkins reported that need to be fixed:
   M D DLS: Dead store to result in
org.apache.hadoop.ozone.web.ozShell.bucket.SetAclBucketHandler.call()
At SetAclBucketHandler.java:[line 91]
   M D DLS: Dead store to result in
org.apache.hadoop.ozone.web.ozShell.keys.SetAclKeyHandler.call()
At SetAclKeyHandler.java:[line 93]





[jira] [Resolved] (HDDS-2398) Remove usage of LogUtils class from ratis-common

2019-11-02 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2398.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Remove usage of LogUtils class from ratis-common
> 
>
> Key: HDDS-2398
> URL: https://issues.apache.org/jira/browse/HDDS-2398
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> MiniOzoneChaosCluster.java uses LogUtils from ratis-common to set log
> levels, but that method was removed from LogUtils as part of RATIS-508.
> We can avoid depending on Ratis for this and use GenericTestUtils from
> hadoop-common test instead:
> LogUtils.setLogLevel(GrpcClientProtocolClient.LOG, Level.WARN);
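
The replacement would look something like this (a sketch; it assumes
GenericTestUtils.setLogLevel has an overload for slf4j loggers, as in the
hadoop-common test jar):

    import org.apache.hadoop.test.GenericTestUtils;
    import org.apache.log4j.Level;
    import org.apache.ratis.grpc.client.GrpcClientProtocolClient;

    // Same effect as the removed ratis-common helper, using the
    // hadoop-common test utility instead.
    GenericTestUtils.setLogLevel(GrpcClientProtocolClient.LOG, Level.WARN);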






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #113: HDDS-2398. Remove usage of LogUtils class from ratis-common.

2019-11-02 Thread GitBox
bharatviswa504 merged pull request #113: HDDS-2398. Remove usage of LogUtils 
class from ratis-common.
URL: https://github.com/apache/hadoop-ozone/pull/113
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #113: HDDS-2398. Remove usage of LogUtils class from ratis-common.

2019-11-02 Thread GitBox
bharatviswa504 commented on issue #113: HDDS-2398. Remove usage of LogUtils 
class from ratis-common.
URL: https://github.com/apache/hadoop-ozone/pull/113#issuecomment-549060188
 
 
   Thank You @xiaoyuyao for the review.
   I have committed this to the trunk.





Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-11-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/

[Oct 31, 2019 10:30:23 PM] (jhung) YARN-9945. Fix javadoc in 
FederationProxyProviderUtil in branch-2
[Nov 1, 2019 3:35:49 AM] (jhung) Add 2.10.0 release notes for HDFS-12943
[Nov 1, 2019 4:21:33 PM] (stack) Revert "HADOOP-16598. Backport "HADOOP-16558 
[COMMON+HDFS] use
[Nov 1, 2019 4:22:23 PM] (stack) HADOOP-16598. Backport "HADOOP-16558 
[COMMON+HDFS] use




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.server.router.webapp.TestRouterWebServicesREST 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/493/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [164K]
   

[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #70: HDDS-1643. Send hostName also part of OMRequest.

2019-11-02 Thread GitBox
cxorm commented on a change in pull request #70: HDDS-1643. Send hostName also 
part of OMRequest.
URL: https://github.com/apache/hadoop-ozone/pull/70#discussion_r341801796
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
 ##
 @@ -170,6 +171,22 @@ public InetAddress getRemoteAddress() throws IOException {
 }
   }
 
+  /**
+   * Return String created from OMRequest userInfo. If userInfo is not
+   * set, returns null.
+   * @return String
+   * @throws IOException
+   */
+  @VisibleForTesting
+  public String getHostName() throws IOException {
+if (omRequest.hasUserInfo()) {
+  return InetAddress.getByName(omRequest.getUserInfo()
 
 Review comment:
   Thanks @bharatviswa504 for the comment.
   After going through it for a while, I also agree this line could be
cleaner. We're going to fix the related method and UT.
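
   For illustration, a cleaner form could read the hostname straight off
the request instead of resolving it through InetAddress (a sketch; a
getHostName() accessor on the generated UserInfo is an assumption based on
what HDDS-1643 adds to OMRequest):

   @VisibleForTesting
   public String getHostName() {
     return omRequest.hasUserInfo()
         ? omRequest.getUserInfo().getHostName()
         : null;
   }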

