allisonwang-db commented on code in PR #46234:
URL: https://github.com/apache/spark/pull/46234#discussion_r1583462883


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/pythonLogicalOperators.scala:
##########
@@ -274,6 +274,24 @@ case class ArrowEvalPythonUDTF(
     copy(child = newChild)
 }
 
+/**
+ * A logical plan that evaluates the 'analyze' method of a [[PythonUDTF]] on the executors.

Review Comment:
   Maybe this is beyond the scope of this PR, but could we add more comments/examples here on how a polymorphic UDTF can be transformed into a plan with this node, where its `analyze` method can be evaluated on executors?



##########
sql/core/src/main/scala/org/apache/spark/sql/execution/python/AnalyzePythonUDTFOnExecutorsExec.scala:
##########
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.python
+
+import org.apache.spark.TaskContext
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions.{Attribute, FunctionTableSubqueryArgumentExpression,
+  GenericInternalRow, PythonUDTF, PythonUDTFAnalyzeResult, UnsafeProjection}
+import org.apache.spark.sql.catalyst.parser.CatalystSqlParser
+import org.apache.spark.sql.catalyst.util.GenericArrayData
+import org.apache.spark.sql.execution.SparkPlan
+import org.apache.spark.sql.execution.python.EvalPythonExec.ArgumentMetadata
+import org.apache.spark.sql.types.StructType
+import org.apache.spark.unsafe.types.UTF8String
+
+/**
+ * A physical plan that calls the 'analyze' method of a [[PythonUDTF]] on executors, when the
+ * ANALYZE_PYTHON_UDTF SQL function is called in a query.
+ *
+ * @param udtf the user-defined Python function
+ * @param requiredChildOutput the required output of the child plan. It's used for omitting data
+ *                            generation that will be discarded next by a projection.
+ * @param resultAttrs the output schema of the Python UDTF.
+ * @param child the child plan
+ */
+case class AnalyzePythonUDTFOnExecutorsExec(
+    udtf: PythonUDTF,
+    requiredChildOutput: Seq[Attribute],
+    resultAttrs: Seq[Attribute],
+    child: SparkPlan)
+  extends EvalPythonUDTFExec with PythonSQLMetrics {
+  override def withNewChildInternal(newChild: SparkPlan): AnalyzePythonUDTFOnExecutorsExec = {
+    copy(child = newChild)
+  }
+
+  override def evaluate(
+      argMetas: Array[ArgumentMetadata],
+      iter: Iterator[InternalRow],
+      schema: StructType,
+      context: TaskContext): Iterator[Iterator[InternalRow]] = {
+    val tableArgs =
+      udtf.children.map(_.isInstanceOf[FunctionTableSubqueryArgumentExpression])
+    val parser = CatalystSqlParser
+    val runner = new UserDefinedPythonTableFunctionAnalyzeRunner(

Review Comment:
   This `UserDefinedPythonTableFunctionAnalyzeRunner` extends `PythonPlannerRunner`, which is designed to run Python processes on the driver side. Does it work out of the box on executors? cc @ueshin



##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/PythonUDF.scala:
##########
@@ -252,17 +261,24 @@ case class UnresolvedPolymorphicPythonUDTF(
  *                                 projection to evaluate these expressions and return the result to
  *                                 the UDTF. The UDTF then receives one input column for each
  *                                 expression in the list, in the order they are listed.
- * @param pickledAnalyzeResult this is the pickled 'AnalyzeResult' instance from the UDTF, which
- *                             contains all metadata returned by the Python UDTF 'analyze' method
- *                             including the result schema of the function call as well as optional
- *                             other information
+ * @param partitionByStrings String representations of the partitioning expressions before parsing.
+ * @param orderByStrings String representations of the ordering expressions before parsing.
+ * @param selectedInputStrings String representations of the selected input expressions before

Review Comment:
   Why do we need the string representations here?
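   For context on what these strings look like (an illustration of the question, not an answer to it): the param docs say they are the expressions "before parsing", and the Scala hunk above parses with `CatalystSqlParser`, which suggests the strings originate in the Python `analyze` method, where expressions can only be described as SQL text. The classes below are simplified local stand-ins mimicking `pyspark.sql.udtf`'s `PartitioningColumn`, `OrderingColumn`, and `AnalyzeResult`, so the snippet runs without Spark.

   ```python
   from dataclasses import dataclass, field
   from typing import List


   @dataclass
   class PartitioningColumn:
       """Stand-in for pyspark.sql.udtf.PartitioningColumn (simplified)."""
       name: str              # the partitioning expression as a SQL string


   @dataclass
   class OrderingColumn:
       """Stand-in for pyspark.sql.udtf.OrderingColumn (simplified)."""
       name: str              # the ordering expression as a SQL string
       ascending: bool = True


   @dataclass
   class AnalyzeResult:
       """Stand-in for pyspark.sql.udtf.AnalyzeResult (simplified)."""
       schema: str
       partitionBy: List[PartitioningColumn] = field(default_factory=list)
       orderBy: List[OrderingColumn] = field(default_factory=list)


   # 'analyze' runs in a Python worker, so partitioning/ordering expressions
   # cross the Python/JVM boundary as plain strings; the JVM side would then
   # parse them into Catalyst expressions.
   result = AnalyzeResult(
       schema="total int",
       partitionBy=[PartitioningColumn("dept")],
       orderBy=[OrderingColumn("salary", ascending=False)],
   )
   ```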



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

