viirya commented on a change in pull request #31296:
URL: https://github.com/apache/spark/pull/31296#discussion_r564382393



##########
File path: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
##########
@@ -2007,6 +2007,54 @@ class DatasetSuite extends QueryTest
 
     checkAnswer(withUDF, Row(Row(1), null, null) :: Row(Row(1), null, null) :: Nil)
   }
+
+  test("SPARK-34205: Pipe Dataset") {
+    assume(TestUtils.testCommandAvailable("cat"))
+
+    val nums = spark.range(4)
+    val piped = nums.pipe("cat", (l, printFunc) => printFunc(l.toString)).toDF

Review comment:
       Err.. I don't think you get the points discussed above. Let me clarify.
   
   @HyukjinKwon suggested using "TRANSFORM" for piping through an external process, instead of adding "pipe" to the Dataset API. The idea is basically to add a DSL. The problem is that "TRANSFORM" is not an expression, so it cannot be used in a DSL approach. In the end, to use "TRANSFORM" for piping a streaming Dataset through an external process, you would still need a top-level API anyway. But the whole point of the DSL was to avoid a top-level API. So...
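   For context, here is a minimal sketch (not Spark code, just an illustration of the semantics under discussion) of what piping a 4-row Dataset through `cat` amounts to: each row is written to the external command's stdin, and the command's stdout lines become the output rows. With `cat`, the data passes through unchanged.
   
   ```shell
   # Emit the rows of spark.range(4), one per line, and pipe them
   # through the external command. "cat" echoes its input, so the
   # output rows equal the input rows.
   printf '0\n1\n2\n3\n' | cat
   # prints 0, 1, 2, 3, one per line
   ```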




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
