Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/4014#discussion_r23904403
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/ScriptTransformation.scala ---
@@ -25,9 +25,18 @@ import org.apache.spark.sql.catalyst.expressions.{Attribute, Expression}
* @param input the set of expressions that should be passed to the script.
* @param script the command that should be executed.
* @param output the attributes that are produced by the script.
+ * @param ioschema the input and output schema applied in the execution of the script.
*/
case class ScriptTransformation(
input: Seq[Expression],
script: String,
output: Seq[Attribute],
- child: LogicalPlan) extends UnaryNode
+ child: LogicalPlan,
+ ioschema: Option[ScriptInputOutputSchema]) extends UnaryNode
+
+/**
+ * The wrapper class of input and output schema properties for transforming with script.
--- End diff --
I'd phrase this as `A placeholder for implementation specific input and output properties when passing data to a script. For example, in Hive this would specify which SerDes to use`.
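For context, here is a minimal, self-contained sketch of how the optional `ioschema` parameter keeps the logical plan engine-agnostic. All types below are simplified stand-ins for illustration only, not Spark's actual Catalyst classes:

```scala
// Hedged sketch: simplified stand-ins for Catalyst's types. Names are borrowed
// from the diff for illustration; these are NOT Spark's real definitions.
trait Expression
case class Attribute(name: String) extends Expression
trait LogicalPlan
case object NoOpPlan extends LogicalPlan

// A placeholder for implementation-specific input and output properties when
// passing data to a script (e.g. in Hive this would carry the SerDe choice).
trait ScriptInputOutputSchema

case class ScriptTransformation(
    input: Seq[Expression],
    script: String,
    output: Seq[Attribute],
    child: LogicalPlan,
    ioschema: Option[ScriptInputOutputSchema])

// An engine that needs no extra I/O properties simply passes None:
val plan = ScriptTransformation(
  input = Seq(Attribute("key")),
  script = "/bin/cat",
  output = Seq(Attribute("value")),
  child = NoOpPlan,
  ioschema = None)
```

Making the schema an `Option` means the common, engine-neutral case stays free of Hive-specific details, while a Hive implementation can supply its own `ScriptInputOutputSchema` subtype.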
---