abhishekagarwal87 commented on code in PR #13156:
URL: https://github.com/apache/druid/pull/13156#discussion_r986522182


##########
distribution/docker/peon.sh:
##########
@@ -0,0 +1,153 @@
+#!/bin/sh
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+# NOTE: this is a 'run' script for the stock tarball
+# It takes 1 required argument (the name of the service,
+# e.g. 'broker', 'historical' etc). Any additional arguments
+# are passed to that service.
+#
+# It accepts 'JAVA_OPTS' as an environment variable
+#
+# Additional env vars:
+# - DRUID_LOG4J -- set the entire log4j.xml verbatim
+# - DRUID_LOG_LEVEL -- override the default log level in default log4j
+# - DRUID_XMX -- set Java Xmx
+# - DRUID_XMS -- set Java Xms
+# - DRUID_MAXNEWSIZE -- set Java max new size
+# - DRUID_NEWSIZE -- set Java new size
+# - DRUID_MAXDIRECTMEMORYSIZE -- set Java max direct memory size
+#
+# - DRUID_CONFIG_COMMON -- full path to a file for druid 'common' properties
+# - DRUID_CONFIG_${service} -- full path to a file for druid 'service' properties
+
+set -e
+SERVICE="overlord"
+
+echo "$(date -Is) startup service $SERVICE"
+
+# We put all the config in /tmp/conf to allow for a
+# read-only root filesystem
+mkdir -p /tmp/conf/
+test -d /tmp/conf/druid && rm -r /tmp/conf/druid
+cp -r /opt/druid/conf/druid /tmp/conf/druid
+
+getConfPath() {
+    cluster_conf_base=/tmp/conf/druid/cluster
+    case "$1" in
+    _common) echo $cluster_conf_base/_common ;;
+    historical) echo $cluster_conf_base/data/historical ;;
+    middleManager) echo $cluster_conf_base/data/middleManager ;;
+    indexer) echo $cluster_conf_base/data/indexer ;;
+    coordinator | overlord) echo $cluster_conf_base/master/coordinator-overlord ;;
+    broker) echo $cluster_conf_base/query/broker ;;
+    router) echo $cluster_conf_base/query/router ;;

Review Comment:
   do we need all this for a script that is meant to launch a peon? 



##########
docs/development/extensions-contrib/k8s-jobs.md:
##########
@@ -0,0 +1,125 @@
+---
+id: k8s-jobs
+title: "MM-less Druid in K8s"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Consider this an [EXPERIMENTAL](../experimental.md) feature mostly because it has not been tested yet on a wide variety of long-running Druid clusters.
+
+Apache Druid Extension to enable using Kubernetes for launching and managing tasks instead of the Middle Managers.  This extension allows you to launch tasks as K8s jobs removing the need for your middle manager.  
+
+## How it works
+
+It takes the podSpec of your `Overlord` pod and creates a kubernetes job from this podSpec.  Thus if you have sidecars such as splunk, hubble, istio it can optionally launch a task as a k8s job.  All jobs are natively restorable, they are decopled from the druid deployment, thus restarting pods or doing upgrades has no affect on tasks in flight.  They will continue to run and when the overlord comes back up it will start tracking them again.  

Review Comment:
   ```suggestion
   It takes the podSpec of your `Overlord` pod and creates a kubernetes job from this podSpec.  Thus if you have sidecars such as splunk, hubble, istio it can optionally launch a task as a k8s job.  All jobs are natively restorable, they are decoupled from the druid deployment, thus restarting pods or doing upgrades has no effect on tasks in flight.  They will continue to run and when the overlord comes back up it will start tracking them again.  
   ```



##########
distribution/docker/peon.sh:
##########
@@ -0,0 +1,153 @@
+#!/bin/sh
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+# NOTE: this is a 'run' script for the stock tarball
+# It takes 1 required argument (the name of the service,
+# e.g. 'broker', 'historical' etc). Any additional arguments
+# are passed to that service.
+#
+# It accepts 'JAVA_OPTS' as an environment variable
+#
+# Additional env vars:
+# - DRUID_LOG4J -- set the entire log4j.xml verbatim
+# - DRUID_LOG_LEVEL -- override the default log level in default log4j
+# - DRUID_XMX -- set Java Xmx
+# - DRUID_XMS -- set Java Xms
+# - DRUID_MAXNEWSIZE -- set Java max new size
+# - DRUID_NEWSIZE -- set Java new size
+# - DRUID_MAXDIRECTMEMORYSIZE -- set Java max direct memory size
+#
+# - DRUID_CONFIG_COMMON -- full path to a file for druid 'common' properties
+# - DRUID_CONFIG_${service} -- full path to a file for druid 'service' properties
+
+set -e
+SERVICE="overlord"
+
+echo "$(date -Is) startup service $SERVICE"

Review Comment:
   why is there a separate script for peon? 



##########
indexing-service/src/main/java/org/apache/druid/indexing/common/task/AbstractTask.java:
##########
@@ -125,6 +134,49 @@ protected AbstractTask(
     this(id, groupId, taskResource, dataSource, context, IngestionMode.NONE);
   }
 
+  @Nullable
+  public String setup(TaskToolbox toolbox) throws Exception
+  {
+    File taskDir = toolbox.getConfig().getTaskDir(getId());
+    FileUtils.mkdirp(taskDir);
+    File attemptDir = Paths.get(taskDir.getAbsolutePath(), "attempt", toolbox.getAttemptId()).toFile();
+    FileUtils.mkdirp(attemptDir);
+    reportsFile = new File(attemptDir, "report.json");
+    log.debug("Task setup complete");
+    return null;
+  }
+
+  @Override
+  public final TaskStatus run(TaskToolbox taskToolbox) throws Exception
+  {
+    try {
+      String errorMessage = setup(taskToolbox);
+      if (org.apache.commons.lang3.StringUtils.isNotBlank(errorMessage)) {

Review Comment:
   hmm. we should have a boolean flag or we can have `setup` throw a checked exception. 
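One shape the reviewer's alternative could take is a checked exception instead of a nullable error-message String. The sketch below uses hypothetical names (`TaskSetupException`, `SetupSketch`), not the PR's actual classes, and reduces the Druid types to plain strings for illustration:

```java
// Sketch of the reviewer's suggestion: setup() either succeeds or throws
// a checked exception, so run() cannot forget to check a returned String.
// All names here are hypothetical, simplified from the PR's AbstractTask.
public class SetupSketch
{
  static class TaskSetupException extends Exception
  {
    TaskSetupException(String message)
    {
      super(message);
    }
  }

  // No nullable/blank-string convention: failure is always an exception.
  static void setup(boolean dirCreatable) throws TaskSetupException
  {
    if (!dirCreatable) {
      throw new TaskSetupException("could not create task directory");
    }
  }

  // run() translates the checked exception into a failed task status.
  static String run(boolean dirCreatable)
  {
    try {
      setup(dirCreatable);
      return "SUCCESS";
    }
    catch (TaskSetupException e) {
      return "FAILURE: " + e.getMessage();
    }
  }

  public static void main(String[] args)
  {
    System.out.println(run(true));   // SUCCESS
    System.out.println(run(false));  // FAILURE: could not create task directory
  }
}
```

The compiler then forces every caller of `setup` to handle the failure path, which is the advantage over `StringUtils.isNotBlank` checks.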



##########
docs/development/extensions.md:
##########
@@ -97,6 +97,7 @@ All of these community extensions can be downloaded using [pull-deps](../operati
 |druid-tdigestsketch|Support for approximate sketch aggregators based on [T-Digest](https://github.com/tdunning/t-digest)|[link](../development/extensions-contrib/tdigestsketch-quantiles.md)|
 |gce-extensions|GCE Extensions|[link](../development/extensions-contrib/gce-extensions.md)|
 |prometheus-emitter|Exposes [Druid metrics](../operations/metrics.md) for Prometheus server collection (https://prometheus.io/)|[link](./extensions-contrib/prometheus.md)|
+|kubernetes-overlord-extensions|Support for launching tasks in k8s, no more Middle Managers|[link](../development/extensions-contrib/k8s-jobs.md)|

Review Comment:
   ```suggestion
   |kubernetes-overlord-extensions|Support for launching tasks in k8s without Middle Managers|[link](../development/extensions-contrib/k8s-jobs.md)|
   ```



##########
docs/development/extensions-contrib/k8s-jobs.md:
##########
@@ -0,0 +1,125 @@
+---
+id: k8s-jobs
+title: "MM-less Druid in K8s"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Consider this an [EXPERIMENTAL](../experimental.md) feature mostly because it has not been tested yet on a wide variety of long-running Druid clusters.
+
+Apache Druid Extension to enable using Kubernetes for launching and managing tasks instead of the Middle Managers.  This extension allows you to launch tasks as K8s jobs removing the need for your middle manager.  
+
+## How it works
+
+It takes the podSpec of your `Overlord` pod and creates a kubernetes job from this podSpec.  Thus if you have sidecars such as splunk, hubble, istio it can optionally launch a task as a k8s job.  All jobs are natively restorable, they are decopled from the druid deployment, thus restarting pods or doing upgrades has no affect on tasks in flight.  They will continue to run and when the overlord comes back up it will start tracking them again.  
+
+## Configuration
+
+To use this extension please make sure to  [include](../extensions.md#loading-extensions)`druid-kubernetes-overlord-extensions` in the extensions load list for your overlord process.
+
+The extension uses the task queue to limit how many concurrent tasks (k8s jobs) are in flight so it is required you have a reasonable value for `druid.indexer.queue.maxSize`.  Additionally set the variable `druid.indexer.runner.namespace` to the namespace in which you are running druid.
+
+Other configurations required are: 
+`druid.indexer.runner.type: k8s` and `druid.indexer.task.enableTaskLevelLogPush: true`
+
+You can add optional labels to your k8s jobs / pods if you need them by using the following configuration: 
+`druid.indexer.runner.labels: '{"key":"value"}'`
+
+All other configurations you had for the middle manager tasks must be moved under the overlord with one caveat, you must specify javaOpts as an array: 
+`druid.indexer.runner.javaOptsArray`, `druid.indexer.runner.javaOpts` is no longer supported.
+
+If you are running without a middle manager you need to also use `druid.processing.intermediaryData.storage.type=deepstore`
+
+Additional Configuration
+
+### Properties
+|Property|Possible Values|Description|Default|required|
+|--------|---------------|-----------|-------|--------|
+|`druid.indexer.runner.debugJobs`|`boolean`|Clean up k8s jobs after tasks complete.|False|No|
+|`druid.indexer.runner.sidecarSupport`|`boolean`|If your overlord pod has sidecars, this will attempt to start the task with the same sidecars as the overlord pod.|False|No|
+|`druid.indexer.runner.kubexitImage`|`String`|Used kubexit project to help shutdown sidecars when the main pod completes.  Otherwise jobs with sidecars never terminate.|karlkfi/kubexit:latest|No|
+|`druid.indexer.runner.disableClientProxy`|`boolean`|Use this if you have a global http(s) proxy and you wish to bypass it.|false|No|
+|`druid.indexer.runner.maxTaskDuration`|`Duration`|Max time a task is allowed to run for before getting killed|4H|No|
+|`druid.indexer.runner.taskCleanupDelay`|`Duration`|How long do jobs stay around before getting reaped from k8s|2D|No|
+|`druid.indexer.runner.taskCleanupInterval`|`Duration`|How often to check for jobs to be reaped|10m|No|
+|`druid.indexer.runner.k8sjobLaunchTimeout`|`Duration`|How long to wait to launch a k8s task before marking it as failed, on a resource constrained cluster it may take some time.|1H|No|
+|`druid.indexer.runner.javaOptsArray`|`Duration`|java opts for the task.|-Xmx1g|No|
+|`druid.indexer.runner.graceTerminationPeriodSeconds`|`Long`|Number of seconds you want to wait after a sigterm for container lifecycle hooks to complete.  Keep at a smaller value if you want tasks to hold locks for shorter periods.|30s (k8s default)|No|
+
+### Gotchas
+
+- You must have in your role the abiliity to launch jobs.  
+- All Druid Pods belonging to one Druid cluster must be inside same kubernetes namespace.

Review Comment:
   It should be the same namespace under which the overlord is running. 
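Taken together, the required properties discussed in this doc might be combined in the overlord's runtime.properties roughly as follows. This is a sketch: the property names come from the doc above, but the values (namespace, queue size, labels, java opts) are illustrative:

```properties
# Run tasks as k8s jobs instead of on middle managers (sketch; values illustrative)
druid.indexer.runner.type=k8s
# Namespace the overlord itself runs in
druid.indexer.runner.namespace=druid
# Caps how many k8s jobs can be in flight at once
druid.indexer.queue.maxSize=10
druid.indexer.task.enableTaskLevelLogPush=true
# Optional labels applied to launched jobs/pods
druid.indexer.runner.labels={"cluster":"dev"}
# javaOpts must be an array; druid.indexer.runner.javaOpts is not supported
druid.indexer.runner.javaOptsArray=["-Xmx1g","-XX:MaxDirectMemorySize=1g"]
# Required when running without middle managers
druid.processing.intermediaryData.storage.type=deepstore
```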



##########
docs/development/extensions-contrib/k8s-jobs.md:
##########
@@ -0,0 +1,125 @@
+---
+id: k8s-jobs
+title: "MM-less Druid in K8s"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Consider this an [EXPERIMENTAL](../experimental.md) feature mostly because it has not been tested yet on a wide variety of long-running Druid clusters.
+
+Apache Druid Extension to enable using Kubernetes for launching and managing tasks instead of the Middle Managers.  This extension allows you to launch tasks as K8s jobs removing the need for your middle manager.  
+
+## How it works
+
+It takes the podSpec of your `Overlord` pod and creates a kubernetes job from this podSpec.  Thus if you have sidecars such as splunk, hubble, istio it can optionally launch a task as a k8s job.  All jobs are natively restorable, they are decopled from the druid deployment, thus restarting pods or doing upgrades has no affect on tasks in flight.  They will continue to run and when the overlord comes back up it will start tracking them again.  
+
+## Configuration
+
+To use this extension please make sure to  [include](../extensions.md#loading-extensions)`druid-kubernetes-overlord-extensions` in the extensions load list for your overlord process.
+
+The extension uses the task queue to limit how many concurrent tasks (k8s jobs) are in flight so it is required you have a reasonable value for `druid.indexer.queue.maxSize`.  Additionally set the variable `druid.indexer.runner.namespace` to the namespace in which you are running druid.
+
+Other configurations required are: 
+`druid.indexer.runner.type: k8s` and `druid.indexer.task.enableTaskLevelLogPush: true`
+
+You can add optional labels to your k8s jobs / pods if you need them by using the following configuration: 
+`druid.indexer.runner.labels: '{"key":"value"}'`

Review Comment:
   IMO the k8s indexer config should all be prefixed with `druid.indexer.runner.k8s`. 



##########
docs/development/extensions-contrib/k8s-jobs.md:
##########
@@ -0,0 +1,127 @@
+---
+id: K8s-jobs
+title: "MM-less Druid in K8s"
+---
+
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~ or more contributor license agreements.  See the NOTICE file
+  ~ distributed with this work for additional information
+  ~ regarding copyright ownership.  The ASF licenses this file
+  ~ to you under the Apache License, Version 2.0 (the
+  ~ "License"); you may not use this file except in compliance
+  ~ with the License.  You may obtain a copy of the License at
+  ~
+  ~   http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~ Unless required by applicable law or agreed to in writing,
+  ~ software distributed under the License is distributed on an
+  ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+  ~ KIND, either express or implied.  See the License for the
+  ~ specific language governing permissions and limitations
+  ~ under the License.
+  -->
+
+Apache Druid Extension to enable using Kubernetes for launching and managing tasks instead of the Middle Managers.  This extension allows you to launch tasks as kubernetes jobs removing the need for your middle manager.  
+
+Consider this an [EXPERIMENTAL](../experimental.md) feature mostly because it has not been tested yet on a wide variety of long-running Druid clusters.
+
+## How it works
+
+The K8s extension takes the podSpec of your `Overlord` pod and creates a kubernetes job from this podSpec.  Thus if you have sidecars such as Splunk or Istio it can optionally launch a task as a K8s job.  All jobs are natively restorable, they are decoupled from the druid deployment, thus restarting pods or doing upgrades has no affect on tasks in flight.  They will continue to run and when the overlord comes back up it will start tracking them again.  
+
+## Configuration
+
+To use this extension please make sure to  [include](../extensions.md#loading-extensions)`druid-kubernetes-overlord-extensions` in the extensions load list for your overlord process.
+
+The extension uses the task queue to limit how many concurrent tasks (K8s jobs) are in flight so it is required you have a reasonable value for `druid.indexer.queue.maxSize`.  Additionally set the variable `druid.indexer.runner.namespace` to the namespace in which you are running druid.
+
+Other configurations required are: 
+`druid.indexer.runner.type: K8s` and `druid.indexer.task.e  nableTaskLevelLogPush: true`

Review Comment:
   ```suggestion
   `druid.indexer.runner.type: K8s` and `druid.indexer.task.enableTaskLevelLogPush: true`
   ```



##########
extensions-contrib/kubernetes-overlord-extensions/src/main/java/org/apache/druid/k8s/overlord/common/K8sTaskAdapter.java:
##########
@@ -0,0 +1,277 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.k8s.overlord.common;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Joiner;
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Iterables;
+import com.google.common.collect.Lists;
+import io.fabric8.kubernetes.api.model.Container;
+import io.fabric8.kubernetes.api.model.ContainerPort;
+import io.fabric8.kubernetes.api.model.EnvVar;
+import io.fabric8.kubernetes.api.model.EnvVarBuilder;
+import io.fabric8.kubernetes.api.model.EnvVarSourceBuilder;
+import io.fabric8.kubernetes.api.model.ObjectFieldSelector;
+import io.fabric8.kubernetes.api.model.ObjectMeta;
+import io.fabric8.kubernetes.api.model.Pod;
+import io.fabric8.kubernetes.api.model.PodSpec;
+import io.fabric8.kubernetes.api.model.PodTemplateSpec;
+import io.fabric8.kubernetes.api.model.Quantity;
+import io.fabric8.kubernetes.api.model.ResourceRequirementsBuilder;
+import io.fabric8.kubernetes.api.model.batch.v1.Job;
+import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.druid.indexing.common.task.Task;
+import org.apache.druid.java.util.common.HumanReadableBytes;
+import org.apache.druid.java.util.emitter.EmittingLogger;
+import org.apache.druid.k8s.overlord.KubernetesTaskRunnerConfig;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+/**
+ * This class transforms tasks to pods, and pods to tasks to assist with creating the job spec for a
+ * peon task.
+ * The two subclasses of this class are the SingleContainerTaskAdapter and the MultiContainerTaskAdapter
+ * This class runs on the overlord, to convert a task into a job, it will take its own podSpec (the current running overlord)
+ * keep volumees, secrets, env variables, config maps, etc. and add some additional information as well as provide a new
+ * command for running the task.
+ * The SingleContainerTaskAdapter only runs a task in a single container (no sidecars)
+ * The MultiContainerTaskAdapter runs with all the sidecars the current running overlord runs with.  Thus, it needs
+ * to add some extra coordination to shut down sidecar containers when the main pod exits.
+ */
+
+public abstract class K8sTaskAdapter implements TaskAdapter<Pod, Job>
+{
+
+  private static final EmittingLogger log = new EmittingLogger(K8sTaskAdapter.class);
+
+  protected final KubernetesClientApi client;
+  protected final KubernetesTaskRunnerConfig config;
+  protected final ObjectMapper mapper;
+
+  public K8sTaskAdapter(
+      KubernetesClientApi client,
+      KubernetesTaskRunnerConfig config,
+      ObjectMapper mapper
+  )
+  {
+    this.client = client;
+    this.config = config;
+    this.mapper = mapper;
+  }
+
+  @Override
+  public Job fromTask(Task task, PeonCommandContext context) throws IOException
+  {
+    String myPodName = System.getenv("HOSTNAME");
+    Pod pod = client.executeRequest(client -> client.pods().inNamespace(config.namespace).withName(myPodName).get());
+    return createJobFromPodSpec(pod.getSpec(), task, context);
+  }
+
+  @Override
+  public Task toTask(Pod from) throws IOException
+  {
+    List<EnvVar> envVars = from.getSpec().getContainers().get(0).getEnv();
+    Optional<EnvVar> taskJson = envVars.stream().filter(x -> "TASK_JSON".equals(x.getName())).findFirst();
+    String contents = taskJson.map(envVar -> taskJson.get().getValue()).orElse(null);
+    if (contents == null) {
+      throw new IOException("No TASK_JSON environment variable found in pod: " + from.getMetadata().getName());
+    }
+    return mapper.readValue(Base64Compression.decompressBase64(contents), Task.class);
+  }
+
+  @VisibleForTesting
+  public abstract Job createJobFromPodSpec(PodSpec podSpec, Task task, PeonCommandContext context) throws IOException;
+
+  protected Job buildJob(
+      K8sTaskId k8sTaskId,
+      Map<String, String> labels,
+      Map<String, String> annotations, PodTemplateSpec podTemplate
+  )
+  {
+    return new JobBuilder()
+        .withNewMetadata()
+        .withName(k8sTaskId.getK8sTaskId())
+        .addToLabels(labels)
+        .addToAnnotations(annotations)
+        .endMetadata()
+        .withNewSpec()
+        .withTemplate(podTemplate)
+        .withActiveDeadlineSeconds(config.maxTaskDuration.toStandardDuration().getStandardSeconds())
+        .withBackoffLimit(0)
+        .withTtlSecondsAfterFinished((int) config.taskCleanupDelay.toStandardDuration().getStandardSeconds())
+        .endSpec()
+        .build();
+  }
+
+  @VisibleForTesting
+  static Optional<Long> getJavaOptValueBytes(String qualifier, List<String> commands)
+  {
+    Long result = null;
+    Optional<String> lastHeapValue = commands.stream().filter(x -> x.startsWith(qualifier)).reduce((x, y) -> y);
+    if (lastHeapValue.isPresent()) {
+      result = HumanReadableBytes.parse(StringUtils.removeStart(lastHeapValue.get(), qualifier));
+    }
+    }
+    return Optional.ofNullable(result);
+  }
+
+  @VisibleForTesting
+  static long getContainerMemory(PeonCommandContext context)
+  {
+    List<String> javaOpts = context.getJavaOpts();
+    Optional<Long> optionalXmx = getJavaOptValueBytes("-Xmx", javaOpts);
+    long heapSize = HumanReadableBytes.parse("1g");
+    if (optionalXmx.isPresent()) {
+      heapSize = optionalXmx.get();
+    }
+    Optional<Long> optionalDbb = getJavaOptValueBytes("-XX:MaxDirectMemorySize=", javaOpts);
+    long dbbSize = heapSize;
+    if (optionalDbb.isPresent()) {
+      dbbSize = optionalDbb.get();
+    }
+    return (long) ((dbbSize + heapSize) * 1.2);
+
+  }
+
+  protected void setupPorts(Container mainContainer)
+  {
+    mainContainer.getPorts().clear();
+    ContainerPort tcpPort = new ContainerPort();
+    tcpPort.setContainerPort(DruidK8sConstants.PORT);
+    tcpPort.setName("druid-port");
+    tcpPort.setProtocol("TCP");
+    ContainerPort httpsPort = new ContainerPort();
+    httpsPort.setContainerPort(DruidK8sConstants.TLS_PORT);
+    httpsPort.setName("druid-tls-port");
+    httpsPort.setProtocol("TCP");
+    mainContainer.setPorts(Lists.newArrayList(httpsPort, tcpPort));
+  }
+
+  protected void addEnvironmentVariables(Container mainContainer, PeonCommandContext context, String taskContents)

Review Comment:
   may not be an issue but is there a limit in k8s on how large the env variable value can be? The task JSON could be large.  
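Kubernetes itself does not document a hard per-variable cap, but the Linux kernel does bound the size of a single environment string and the total env block, so very large task JSON could plausibly fail to launch; the `Base64Compression.decompressBase64` call quoted above suggests the extension mitigates this by compressing the JSON. A sketch of such a gzip+base64 round trip (method names here are illustrative, not the extension's actual API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Illustrative gzip+base64 codec for a TASK_JSON-style env variable.
public class TaskJsonCodec
{
  // gzip the JSON, then base64 so it is safe as an env var value
  public static String compressBase64(String json) throws IOException
  {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (GZIPOutputStream gzip = new GZIPOutputStream(bytes)) {
      gzip.write(json.getBytes(StandardCharsets.UTF_8));
    }
    return Base64.getEncoder().encodeToString(bytes.toByteArray());
  }

  // reverse: base64-decode, then gunzip back to the original JSON
  public static String decompressBase64(String encoded) throws IOException
  {
    byte[] compressed = Base64.getDecoder().decode(encoded);
    try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
      return new String(gzip.readAllBytes(), StandardCharsets.UTF_8);
    }
  }

  public static void main(String[] args) throws IOException
  {
    // Repetitive payload stands in for a large ingestion spec
    String json = "{\"type\":\"index_parallel\",\"spec\":\"...\"}".repeat(1000);
    String encoded = compressBase64(json);
    System.out.println("raw=" + json.length() + " encoded=" + encoded.length());
    if (!decompressBase64(encoded).equals(json)) {
      throw new IllegalStateException("round trip failed");
    }
  }
}
```

Even with compression, a pathologically large spec could still hit the kernel's limit, so the reviewer's question is worth an explicit size check in the adapter.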



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
