HeartSaVioR commented on a change in pull request #31296:
URL: https://github.com/apache/spark/pull/31296#discussion_r564158012
##########
File path: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
##########
@@ -2007,6 +2007,54 @@ class DatasetSuite extends QueryTest
checkAnswer(withUDF, Row(Row(1), null, null) :: Row(Row(1), null, null) ::
Nil)
}
+
+ test("SPARK-34205: Pipe Dataset") {
+ assume(TestUtils.testCommandAvailable("cat"))
+
+ val nums = spark.range(4)
+ val piped = nums.pipe("cat", (l, printFunc) => printFunc(l.toString)).toDF
Review comment:
Great point! I don't know how exhaustively Spark implements Hive's transform feature, but the description of transform in Hive's manual looks quite powerful, going well beyond what we plan to provide with pipe.
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Transform#LanguageManualTransform-Transform/Map-ReduceSyntax
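For reference, the Hive-style transform syntax described on that page, invoked from Spark via `spark.sql` (a sketch only; the table name `src`, the columns, and the use of `cat` as the script are illustrative, and on older Spark versions running this requires a Hive-enabled session):

```scala
// Sketch: Hive-style script transformation issued through Spark SQL.
// Rows of `src` are serialized to the script's stdin; the script's
// stdout lines are parsed back into the declared output columns.
val transformed = spark.sql(
  """SELECT TRANSFORM (key, value)
    |USING 'cat'
    |AS (k STRING, v STRING)
    |FROM src
    |""".stripMargin)
```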
~Looks like the reason for the absence of pipe in DataFrame is obvious - transform simply replaced it.~ (Not valid, as transform was only available with Hive support.) Transform also appears to be usable only from SQL statements, so we still need DSL support to use this functionality in Structured Streaming (SS).
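By contrast, the DSL-level shape this PR proposes (mirroring the test above) already has an RDD-level counterpart. A sketch, assuming a local `SparkSession` named `spark`:

```scala
// Existing API: RDD.pipe streams each partition's elements to the
// external command's stdin (one line per element, via toString) and
// reads the command's stdout back, one output element per line.
val pipedRdd = spark.sparkContext
  .range(0, 4)
  .pipe("cat")

// A Dataset-level pipe would expose the same idea on the DSL,
// which is what would make it reachable from SS queries as well.
```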
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]