churromorales commented on PR #13156:
URL: https://github.com/apache/druid/pull/13156#issuecomment-1281201211

   > > This doesn't use the resources of the overlord, we overwrite the 
resources for podSpec. The memory is derived from your JAVA_OPTS (for your 
task) basically 1.2*(Xmx + Dbb) for the memory limit and the cpu is always 1.
   > 
   > That is super great news on the memory side, but not so great on the CPU 
side for us. We have 23 continuously running ingestion tasks and they only use 
a tenth of a core (100m) each, so fixing the resource request to one core is a 
9x overprovision totaling 21 cores. A full core works for our compaction tasks, 
but those run for about 10 minutes every 4 hours.
   > 
   > We'll explore some things with Druid in the near future, so we might use 
this PR before it lands and give some feedback.
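
   The derivation quoted above (memory limit = 1.2 * (Xmx + direct memory), CPU fixed at 1) can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code; the helper names `parseBytes` and `deriveMemoryLimit` are made up for the example:

   ```java
   // Sketch of the resource derivation described above, assuming JAVA_OPTS
   // supplies -Xmx and -XX:MaxDirectMemorySize as JVM-style size strings.
   // The memory limit is 1.2 * (heap + direct memory); CPU is always 1 core.
   public class PodResourceSketch {
       // Parse a JVM size string like "2g", "512m", or "64k" into bytes.
       static long parseBytes(String v) {
           char unit = Character.toLowerCase(v.charAt(v.length() - 1));
           long n;
           switch (unit) {
               case 'g': n = Long.parseLong(v.substring(0, v.length() - 1));
                         return n * 1024L * 1024L * 1024L;
               case 'm': n = Long.parseLong(v.substring(0, v.length() - 1));
                         return n * 1024L * 1024L;
               case 'k': n = Long.parseLong(v.substring(0, v.length() - 1));
                         return n * 1024L;
               default:  return Long.parseLong(v); // bare byte count
           }
       }

       // Memory limit in bytes: 1.2 * (Xmx + MaxDirectMemorySize).
       static long deriveMemoryLimit(String xmx, String directMem) {
           return (long) (1.2 * (parseBytes(xmx) + parseBytes(directMem)));
       }

       public static void main(String[] args) {
           // e.g. -Xmx2g -XX:MaxDirectMemorySize=1g
           System.out.println("memory limit bytes: " + deriveMemoryLimit("2g", "1g"));
           System.out.println("cpu: 1"); // fixed, per the quoted description
       }
   }
   ```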
   
   I agree, we need some extra configuration. I think that can come in a 
subsequent PR: the more folks use this, the more we will find that needs to be 
configurable.  I like the idea of having the current model be the default, and 
then for users who want something more customized, perhaps we can have a 
configmap-based podSpec they could load as a base template.  I think that would 
please most folks, while still providing an easy configuration for those users 
who just want to have jobs run in k8s.  
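
   To make the configmap-based base template idea concrete, a sketch might look like the following. This is purely illustrative: the ConfigMap name, the data key, and the container name below do not exist in this PR and are hypothetical:

   ```yaml
   # Hypothetical base podSpec template loaded from a ConfigMap; only the
   # fields a user wants to override (here, the CPU request) need appear.
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: druid-task-pod-template   # made-up name for illustration
   data:
     podSpec.yaml: |
       containers:
         - name: druid-task          # made-up container name
           resources:
             requests:
               cpu: 100m             # e.g. the tenth of a core cited above
             limits:
               cpu: "1"
   ```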
   
   I just pushed up a commit; I had it running on a few clusters for 4-5 days 
and everything seems good.  If you want to test this patch, I would recommend 
using the latest commit from this branch, as it dramatically reduces the time 
spent locking.  Let me know if you have any other concerns about this patch.  
If you have already done a review, you can look at the last commit to see what 
changed.  Thanks again for the reviews. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

