[ https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=320106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-320106 ]
ASF GitHub Bot logged work on HDDS-2034:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Sep/19 03:14
            Start Date: 29/Sep/19 03:14
    Worklog Time Spent: 10m
      Work Description: ChenSammi commented on pull request #1469: HDDS-2034. Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329335686

##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/HealthyPipelineSafeModeRule.java
##########
@@ -49,43 +39,52 @@
  * through in a cluster.
  */
 public class HealthyPipelineSafeModeRule
-    extends SafeModeExitRule<PipelineReportFromDatanode>{
+    extends SafeModeExitRule<Pipeline>{

   public static final Logger LOG =
       LoggerFactory.getLogger(HealthyPipelineSafeModeRule.class);
-  private final PipelineManager pipelineManager;
   private final int healthyPipelineThresholdCount;
   private int currentHealthyPipelineCount = 0;
-  private final Set<DatanodeDetails> processedDatanodeDetails =
-      new HashSet<>();

   HealthyPipelineSafeModeRule(String ruleName, EventQueue eventQueue,
       PipelineManager pipelineManager,
       SCMSafeModeManager manager, Configuration configuration) {
     super(manager, ruleName, eventQueue);
-    this.pipelineManager = pipelineManager;
     double healthyPipelinesPercent =
         configuration.getDouble(HddsConfigKeys.
                 HDDS_SCM_SAFEMODE_HEALTHY_PIPELINE_THRESHOLD_PCT,
             HddsConfigKeys.
                 HDDS_SCM_SAFEMODE_HEALTHY_PIPELINE_THRESHOLD_PCT_DEFAULT);

+    int minHealthyPipelines = 0;
+
+    boolean createPipelineInSafemode = configuration.getBoolean(
+        HddsConfigKeys.HDDS_SCM_SAFEMODE_PIPELINE_CREATION,
+        HddsConfigKeys.HDDS_SCM_SAFEMODE_PIPELINE_CREATION_DEFAULT);
+
+    if (createPipelineInSafemode) {
+      minHealthyPipelines =
+          configuration.getInt(HddsConfigKeys.HDDS_SCM_SAFEMODE_MIN_PIPELINE,
+              HddsConfigKeys.HDDS_SCM_SAFEMODE_MIN_PIPELINE_DEFAULT);
+    }
+
     Preconditions.checkArgument(
         (healthyPipelinesPercent >= 0.0 && healthyPipelinesPercent <= 1.0),
         HddsConfigKeys.
             HDDS_SCM_SAFEMODE_HEALTHY_PIPELINE_THRESHOLD_PCT
             + " value should be >= 0.0 and <= 1.0");

-    // As we want to wait for 3 node pipelines
-    int pipelineCount =
+    // As we want to wait for RATIS write pipelines, no matter ONE or THREE
+    int pipelineCount = pipelineManager.getPipelines(
+        HddsProtos.ReplicationType.RATIS, Pipeline.PipelineState.OPEN).size() +
         pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
-            HddsProtos.ReplicationFactor.THREE).size();
+        Pipeline.PipelineState.ALLOCATED).size();

Review comment:
   Will check it. Last time I remember an integration test failure with CLOSED pipelines involved.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 320106)
    Time Spent: 9h 10m  (was: 9h)
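As a side note on the hunk above, here is a minimal standalone sketch of how the two values it introduces (the healthy-pipeline percentage and the optional minimum pipeline count used when pipeline creation is allowed in safemode) could be folded into the rule's exit threshold. The getPipelines() calls mirror the patch; the class name, the method names, and the Math.max/Math.ceil combination are illustrative assumptions, not the actual HDDS implementation.

import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
import org.apache.hadoop.hdds.scm.pipeline.PipelineManager;

// Illustrative sketch only; class and method names are hypothetical.
public final class SafeModeThresholdSketch {

  // Count the RATIS pipelines that can still become healthy: OPEN plus
  // ALLOCATED. CLOSED pipelines are deliberately left out, which is the
  // case raised in the review comment above.
  static int countCandidatePipelines(PipelineManager pipelineManager) {
    return pipelineManager.getPipelines(
        HddsProtos.ReplicationType.RATIS, Pipeline.PipelineState.OPEN).size()
        + pipelineManager.getPipelines(
        HddsProtos.ReplicationType.RATIS, Pipeline.PipelineState.ALLOCATED).size();
  }

  // Combine the percentage rule with the configured floor: when pipeline
  // creation is allowed in safemode, the rule should not be satisfied before
  // at least minHealthyPipelines pipelines are reported healthy.
  static int healthyPipelineThreshold(int pipelineCount,
      double healthyPipelinesPercent, int minHealthyPipelines) {
    return Math.max(minHealthyPipelines,
        (int) Math.ceil(healthyPipelinesPercent * pipelineCount));
  }
}

With minHealthyPipelines left at 0 (the default when HDDS_SCM_SAFEMODE_PIPELINE_CREATION is disabled, as in the hunk), this reduces to the plain percentage rule.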
> Async RATIS pipeline creation and destroy through heartbeat commands
> --------------------------------------------------------------------
>
>                 Key: HDDS-2034
>                 URL: https://issues.apache.org/jira/browse/HDDS-2034
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Sammi Chen
>            Assignee: Sammi Chen
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations: SCM
> directly connects to each datanode of the pipeline through a gRPC channel
> to create or destroy the pipeline.
> This task is to remove the gRPC channel and instead send the pipeline
> creation and destroy actions to each datanode through heartbeat commands.
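To make the intended flow concrete, here is a small, self-contained sketch of the heartbeat-driven model described above: SCM only queues pipeline actions per datanode and hands them out with the next heartbeat response, instead of opening a gRPC channel to each datanode. Every name in it (PipelineCommandQueue, PipelineCommand, and so on) is hypothetical; this is not the actual HDDS command machinery.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the asynchronous flow described in HDDS-2034.
public final class HeartbeatPipelineSketch {

  enum PipelineAction { CREATE, CLOSE }

  // A pipeline action addressed to a single datanode.
  static final class PipelineCommand {
    final PipelineAction action;
    final UUID pipelineId;

    PipelineCommand(PipelineAction action, UUID pipelineId) {
      this.action = action;
      this.pipelineId = pipelineId;
    }

    @Override
    public String toString() {
      return action + " pipeline " + pipelineId;
    }
  }

  // SCM side: commands are queued per datanode instead of being pushed
  // synchronously over a dedicated gRPC channel.
  static final class PipelineCommandQueue {
    private final Map<UUID, Queue<PipelineCommand>> perDatanode =
        new ConcurrentHashMap<>();

    void add(UUID datanodeId, PipelineCommand command) {
      perDatanode.computeIfAbsent(datanodeId, k -> new ArrayDeque<>())
          .add(command);
    }

    // Called while SCM handles a heartbeat from this datanode: everything
    // pending is drained and piggybacked on the heartbeat response.
    List<PipelineCommand> drain(UUID datanodeId) {
      Queue<PipelineCommand> queue = perDatanode.remove(datanodeId);
      if (queue == null) {
        return Collections.emptyList();
      }
      return new ArrayList<>(queue);
    }
  }

  public static void main(String[] args) {
    PipelineCommandQueue scmQueue = new PipelineCommandQueue();
    UUID datanode = UUID.randomUUID();
    UUID pipeline = UUID.randomUUID();

    // SCM decides to create (and later destroy) a pipeline: it only enqueues
    // commands; no direct connection to the datanode is made here.
    scmQueue.add(datanode, new PipelineCommand(PipelineAction.CREATE, pipeline));
    scmQueue.add(datanode, new PipelineCommand(PipelineAction.CLOSE, pipeline));

    // On the next heartbeat from that datanode, the pending commands ride
    // back in the response and are executed locally by the datanode.
    for (PipelineCommand command : scmQueue.drain(datanode)) {
      System.out.println("datanode " + datanode + " executes: " + command);
    }
  }
}

The actual patch presumably routes these actions through SCM's existing heartbeat command path and the datanodes' command handlers; the sketch only captures the ordering: enqueue on SCM, deliver with the heartbeat response, execute on the datanode.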