HyukjinKwon commented on a change in pull request #31296:
URL: https://github.com/apache/spark/pull/31296#discussion_r564138543



##########
File path: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
##########
@@ -2007,6 +2007,54 @@ class DatasetSuite extends QueryTest
 
     checkAnswer(withUDF, Row(Row(1), null, null) :: Row(Row(1), null, null) :: Nil)
   }
+
+  test("SPARK-34205: Pipe Dataset") {
+    assume(TestUtils.testCommandAvailable("cat"))
+
+    val nums = spark.range(4)
+    val piped = nums.pipe("cat", (l, printFunc) => printFunc(l.toString)).toDF

Review comment:
       @viirya, what do you think about exposing a `transform`-equivalent 
expression as a DSL? e.g.:
   
   ```scala
   scala> val data = Seq((123, "first"), (4567, "second")).toDF("num", "word")
   data: org.apache.spark.sql.DataFrame = [num: int, word: string]
   
   scala> data.createOrReplaceTempView("t1")
   
   scala> sql("select transform(*) using 'cat' from t1").show()
   +----+------+
   | key| value|
   +----+------+
   | 123| first|
   |4567|second|
   +----+------+
   ```
   
   ```scala
   scala> data.repartition(1).createOrReplaceTempView("t1")
   
   scala> sql("select transform(*) using 'wc -l' as (echo) from t1").show()
   +--------+
   |    echo|
   +--------+
   |       2|
   +--------+
   ```
   
    Spark recently added native support for script transformation 
(`TRANSFORM ... USING` in SQL), and I think it could do what you want.
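   For intuition, script transform serializes each input row to the child 
process's stdin as a tab-separated line and parses the process's stdout back 
into rows (this describes the default delimited row format; a custom serde 
changes the wire format). A rough shell sketch, outside Spark, of what the 
`wc -l` example above sees for the two-row `t1`:
   
   ```shell
   # Emulate the stdin that TRANSFORM(*) USING 'wc -l' feeds the script:
   # one tab-separated line per input row. 'wc -l' then just counts the
   # serialized lines, which is why the example returns 2.
   printf '123\tfirst\n4567\tsecond\n' | wc -l
   ```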






