beliefer commented on code in PR #39091:
URL: https://github.com/apache/spark/pull/39091#discussion_r1073397082
##########
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala:
##########
@@ -338,6 +340,22 @@ class SparkConnectPlanner(session: SparkSession) {
}
}
+  private def transformCollectMetrics(rel: proto.CollectMetrics): LogicalPlan = {
+    val metrics = rel.getMetricsList.asScala.map { expr =>
+      Column(transformExpression(expr))
+    }
+
+    if (rel.getIsObservation) {
Review Comment:
> We don't need Observation here. We just need to send the observed metrics as part of the response stream.
But we should keep the behavior of the Spark Connect API consistent with the PySpark API. PySpark's `observe` supports passing an `Observation` as a parameter, and the doctest checks this consistency.
Maybe we could keep `Observation` support in the Connect API but not use it on the server side, instead using `CollectMetrics` directly.