xBis7 commented on code in PR #53722:
URL: https://github.com/apache/airflow/pull/53722#discussion_r2495639245


##########
airflow-core/src/airflow/jobs/scheduler_job_runner.py:
##########
@@ -2108,18 +2136,43 @@ def _emit_pool_metrics(self, session: Session = NEW_SESSION) -> None:
         with DebugTrace.start_span(span_name="emit_pool_metrics", component="SchedulerJobRunner") as span:
             pools = Pool.slots_stats(session=session)
             for pool_name, slot_stats in pools.items():
-                Stats.gauge(f"pool.open_slots.{pool_name}", slot_stats["open"])
-                Stats.gauge(f"pool.queued_slots.{pool_name}", slot_stats["queued"])
-                Stats.gauge(f"pool.running_slots.{pool_name}", slot_stats["running"])
-                Stats.gauge(f"pool.deferred_slots.{pool_name}", slot_stats["deferred"])
-                Stats.gauge(f"pool.scheduled_slots.{pool_name}", slot_stats["scheduled"])
-
-                # Same metrics with tagging
-                Stats.gauge("pool.open_slots", slot_stats["open"], tags={"pool_name": pool_name})
-                Stats.gauge("pool.queued_slots", slot_stats["queued"], tags={"pool_name": pool_name})
-                Stats.gauge("pool.running_slots", slot_stats["running"], tags={"pool_name": pool_name})
-                Stats.gauge("pool.deferred_slots", slot_stats["deferred"], tags={"pool_name": pool_name})
-                Stats.gauge("pool.scheduled_slots", slot_stats["scheduled"], tags={"pool_name": pool_name})
+                # If enabled on the config, publish metrics twice,
+                # once with backward compatible name, and then with tags.
+                DualStatsManager.gauge(
+                    f"pool.open_slots.{pool_name}",
+                    "pool.open_slots",
+                    slot_stats["open"],
+                    tags={},
+                    extra_tags={"pool_name": pool_name},
+                )
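
A note for readers without the PR checked out: below is a minimal, illustrative sketch of the dual-emit behaviour the new call relies on. The actual DualStatsManager in this PR may be organised differently, and the [metrics] option name used here is an assumption, not the real config key.

    # Illustrative sketch only -- not the implementation from this PR.
    from __future__ import annotations

    from airflow.configuration import conf
    from airflow.stats import Stats


    class DualStatsManager:
        @staticmethod
        def gauge(
            legacy_name: str,
            name: str,
            value: float,
            *,
            tags: dict[str, str],
            extra_tags: dict[str, str],
        ) -> None:
            # Backward-compatible metric name, e.g. "pool.open_slots.<pool_name>".
            if conf.getboolean("metrics", "emit_legacy_metric_names", fallback=True):  # assumed option name
                Stats.gauge(legacy_name, value, tags=tags)
            # Tagged metric, e.g. "pool.open_slots" with {"pool_name": ...}.
            Stats.gauge(name, value, tags={**tags, **extra_tags})

With something along these lines, existing dashboards keep receiving pool.open_slots.<pool_name> while the tagged pool.open_slots metric carries pool_name as a tag.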

Review Comment:
   Right now I have added validation only for the case where the legacy metric is read from the YAML.
   
   A pre-commit check, or a cross-check validation every time we try to add a metric to the metric_map, would be good to have (a rough sketch of what I mean is below). But I think it should happen in another PR, because it would widen the scope of this one by a lot.
   
   Just to get the tests working, I'm adding metrics that have been missing from the docs. The issue is that I'm not entirely sure about the descriptions: I'm reading the code, but what I'm writing is still partly a guess. So the validation PR should start as a draft with a list of the newly documented metrics, so that people can comment and explain what each one does.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
