jscheffl commented on code in PR #43737:
URL: https://github.com/apache/airflow/pull/43737#discussion_r1831484881
##########
providers/src/airflow/providers/edge/cli/edge_command.py:
##########
@@ -312,11 +322,11 @@ def stop(args):
logger.warning("Could not find PID of worker.")
-ARG_CONCURRENCY = Arg(
- ("-c", "--concurrency"),
+ARG_CAPACITY = Arg(
+ ("-c", "--capacity"),
Review Comment:
I don't have a final opinion, but I lean towards keeping the
concurrency argument... mainly because the CLI argument is the same on
Celery... though looking at the code, it is intrinsically the same thing... mhm,
but an alias just for convenience might be rather confusing...? (Sorry, just
thinking out loud)
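If the alias route were taken, a minimal `argparse` sketch (hypothetical, not the actual Airflow `Arg` definition) could register both spellings against one destination:

```python
import argparse

# Hypothetical sketch: accept both "--capacity" and a legacy
# "--concurrency" spelling (for parity with the Celery CLI), mapped
# onto the same destination. Names and defaults are illustrative.
parser = argparse.ArgumentParser(prog="edge-worker")
parser.add_argument(
    "-c",
    "--capacity",
    "--concurrency",  # convenience alias; could be documented as deprecated
    dest="capacity",
    type=int,
    default=1,
    help="Number of task instances the worker will run in parallel",
)

args = parser.parse_args(["--concurrency", "4"])
print(args.capacity)  # both spellings populate args.capacity
```

Whether the extra spelling helps or confuses is exactly the open question above; argparse at least makes the mapping cheap.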
##########
docs/apache-airflow-providers-edge/edge_executor.rst:
##########
@@ -209,6 +210,50 @@ could take thousands of tasks without a problem), or from an environment
perspective (you want a worker running from a specific location where required
infrastructure is available).
+Capacity handling
Review Comment:
Can you add a note that if not provided, the default capacity is counted as 1?
Also worthwhile to mention that only integers are supported, unlike K8s, where
you can reserve fractional resources such as `cpu=300millis`.
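To pin down the semantics being asked for, a hypothetical helper (not code from the PR) could look like this: default 1 when omitted, whole integers only, no K8s-style fractional units:

```python
def parse_capacity(value=None):
    """Hypothetical illustration of the requested docs note:
    capacity defaults to 1 when not provided, and only whole
    integers are accepted (no K8s-style fractions like cpu=300millis)."""
    if value is None:
        return 1  # default capacity when the flag is omitted
    capacity = int(value)  # "0.3" or "300m" raise ValueError here
    if capacity < 1:
        raise ValueError("capacity must be a positive integer")
    return capacity

print(parse_capacity())     # 1 (default)
print(parse_capacity("4"))  # 4
```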
##########
providers/src/airflow/providers/edge/provider.yaml:
##########
@@ -78,9 +78,9 @@ config:
type: integer
example: "10"
default: "30"
- worker_concurrency:
+ worker_capacity:
description: |
- The concurrency that will be used when starting workers with the
+ The capacity that will be used when starting workers with the
``airflow edge worker`` command. This defines the number of task instances that
a worker will take, so size up your workers based on the resources on
Review Comment:
These two lines no longer fit as a description of the desired change. Can
you please rephrase them?
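A possible rephrasing could also be illustrated with a small ``airflow.cfg`` fragment; the section name and value below are assumptions for illustration, not taken from the PR:

```ini
# Hypothetical airflow.cfg sketch; section name and value are illustrative.
[edge]
# Capacity used when starting workers with ``airflow edge worker``:
# the number of task instances a worker will take in parallel.
# Only whole integers are supported.
worker_capacity = 4
```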
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]