FrankChen021 commented on code in PR #17738:
URL: https://github.com/apache/druid/pull/17738#discussion_r1963330088


##########
docs/development/extensions-core/k8s-jobs.md:
##########
@@ -33,13 +33,32 @@ The K8s extension builds a pod spec for each task using the specified pod adapte
 
 ## Configuration
 
-To use this extension please make sure to  [include](../../configuration/extensions.md#loading-extensions)`druid-kubernetes-overlord-extensions` in the extensions load list for your overlord process.
+To use this extension please make sure to [include](../../configuration/extensions.md#loading-extensions) `druid-kubernetes-overlord-extensions` in the extensions load list for your overlord process.
 
 The extension uses `druid.indexer.runner.capacity` to limit the number of k8s jobs in flight. A good initial value for this would be the sum of the total task slots of all the middle managers you were running before switching to K8s based ingestion. The K8s task runner uses one thread per Job that is created, so setting this number too large can cause memory issues on the overlord. Additionally set the variable `druid.indexer.runner.namespace` to the namespace in which you are running druid.
 
 Other configurations required are:
 `druid.indexer.runner.type: k8s` and `druid.indexer.task.encapsulatedTask: true`
 
+### Running Task Pods in Another Namespace
+
+It is possible to run task pods in a different namespace from the rest of your Druid cluster.

Review Comment:
   That makes sense, because in non-pod-template mode, tasks are scheduled in the same namespace as the Overlord.
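
   For readers following this thread: a minimal sketch of the overlord `runtime.properties` described in the quoted docs, covering the non-pod-template case where task Jobs are created in the Overlord's own namespace. The property names come from the docs above; the namespace and capacity values are illustrative assumptions, not recommendations.

   ```properties
   # Load the K8s task-running extension on the overlord (list trimmed to the relevant entry)
   druid.extensions.loadList=["druid-kubernetes-overlord-extensions"]

   # Required settings per the docs above
   druid.indexer.runner.type=k8s
   druid.indexer.task.encapsulatedTask=true

   # Namespace the overlord runs in; without a pod-template override, task Jobs are created here
   # (the value "druid" is an assumed example)
   druid.indexer.runner.namespace=druid

   # Cap on concurrent K8s Jobs; the runner uses one thread per Job, so keep this modest
   # (the value 30 is an assumed example)
   druid.indexer.runner.capacity=30
   ```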



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

