patryk126p commented on issue #23727:
URL: https://github.com/apache/airflow/issues/23727#issuecomment-1131196192

   @dstandish we are not using KubernetesPodOperator nor specifying 
`full_pod_spec`. For `executor_config`, we use something like this:
   ```python
   from kubernetes.client import (
       V1Container,
       V1ObjectMeta,
       V1Pod,
       V1PodSpec,
       V1ResourceRequirements,
   )

   {
       "pod_override": V1Pod(
           spec=V1PodSpec(
               containers=[
                   V1Container(
                       name="base",
                       resources=V1ResourceRequirements(
                           limits={"cpu": "<X>", "memory": "<X>"},
                           requests={"cpu": "<X>", "memory": "<X>"},
                       ),
                   )
               ]
           ),
           # metadata=V1ObjectMeta(
           #     annotations={"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"}
           # ),
       )
   }
   ```
   The metadata/annotations are added only for very long-running and critical 
tasks, to make sure Kubernetes won't evict the pods in the middle of 
processing.
   A sample DAG may be a little tricky to provide, as almost all of our DAGs 
use custom operators (mostly based directly on BaseOperator). I could provide 
a simplified version using PythonOperators, but I'm not sure that would be of 
much help. In the majority of our DAGs the flow is simple: collect data from 
an API, dump it to S3, process it (if needed), load it into stage tables, then 
load it into final tables.
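   For what it's worth, that flow could be sketched roughly like this. All 
function names and the in-memory `storage` dict are hypothetical stand-ins, 
not our actual operators; in reality each step would be a custom operator (or 
a PythonOperator) with S3 and database clients behind it:

   ```python
   # Hypothetical sketch of the typical DAG flow:
   # collect from API -> dump to S3 -> process -> stage tables -> final tables.
   # The "storage" dict stands in for S3 and the database.

   def collect_from_api():
       # Stand-in for an API call returning raw records.
       return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

   def dump_to_s3(records, storage):
       # Stand-in for uploading the raw dump to S3.
       storage["raw"] = records

   def process(storage):
       # Optional processing step, e.g. filtering out bad records.
       storage["processed"] = [r for r in storage["raw"] if r["value"] > 0]

   def load_stage(storage):
       # Load processed data into stage tables.
       storage["stage_table"] = list(storage["processed"])

   def load_final(storage):
       # Promote stage data into final tables.
       storage["final_table"] = list(storage["stage_table"])

   storage = {}
   dump_to_s3(collect_from_api(), storage)
   process(storage)
   load_stage(storage)
   load_final(storage)
   print(len(storage["final_table"]))  # → 2
   ```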
   In general we are not doing anything related to `startup_probe`, and the 
only direct link between our DAG/task definitions and Kubernetes is the 
`executor_config` that I've provided.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.