AngersZhuuuu commented on a change in pull request #31296:
URL: https://github.com/apache/spark/pull/31296#discussion_r564390551



##########
File path: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
##########
@@ -2007,6 +2007,54 @@ class DatasetSuite extends QueryTest
 
    checkAnswer(withUDF, Row(Row(1), null, null) :: Row(Row(1), null, null) :: Nil)
   }
+
+  test("SPARK-34205: Pipe Dataset") {
+    assume(TestUtils.testCommandAvailable("cat"))
+
+    val nums = spark.range(4)
+    val piped = nums.pipe("cat", (l, printFunc) => printFunc(l.toString)).toDF
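The proposed `Dataset.pipe` in this PR writes each row's printed form to an external command's stdin, one line at a time, and reads the command's stdout back as the result. Outside Spark, that round-trip can be sketched with a plain `ProcessBuilder` — this is an illustrative sketch in Java, not the PR's actual implementation, and it assumes the command (here `cat`) is available on `PATH`:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class PipeSketch {
    // Feed each element's string form to an external command, one per line,
    // then collect the command's stdout lines. This mirrors what a per-partition
    // pipe would do; error handling and concurrency are omitted for brevity.
    static List<String> pipe(List<Long> elems, String command) throws Exception {
        Process proc = new ProcessBuilder(command).start();

        // Write all input, then close stdin so the child process sees EOF.
        try (Writer w = new OutputStreamWriter(
                proc.getOutputStream(), StandardCharsets.UTF_8)) {
            for (Long e : elems) {
                w.write(e.toString());
                w.write('\n');
            }
        }

        // Read the child's stdout back line by line.
        List<String> out = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(proc.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.add(line);
            }
        }
        proc.waitFor();
        return out;
    }

    public static void main(String[] args) throws Exception {
        // With "cat" the output is the input echoed back unchanged.
        System.out.println(pipe(Arrays.asList(0L, 1L, 2L, 3L), "cat"));
    }
}
```

With `cat` as the command, the four input lines come back unchanged, which is exactly why the test above pipes `spark.range(4)` through `cat` and can assert on the echoed values.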

Review comment:
       > So in the end in order to use "TRANSFORM" for piping through external process for streaming Dataset, you will need a top-level API too...But the point of DSL is to avoid a top-level API. So...
   
   I joined the community relatively late, so I'm not familiar with the earlier design discussions. That said, IMO, as an ordinary developer, it's worth adding things that are useful to users.
   
   >  you will need a top-level API too...
   
   What do you mean by a top-level API here — a plan node like `CollectSet`, or something else?
   




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
