viirya commented on a change in pull request #31296:
URL: https://github.com/apache/spark/pull/31296#discussion_r564169518
##########
File path: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
##########
@@ -2007,6 +2007,54 @@ class DatasetSuite extends QueryTest
     checkAnswer(withUDF, Row(Row(1), null, null) :: Row(Row(1), null, null) ::
       Nil)
   }
+
+  test("SPARK-34205: Pipe Dataset") {
+    assume(TestUtils.testCommandAvailable("cat"))
+
+    val nums = spark.range(4)
+    val piped = nums.pipe("cat", (l, printFunc) => printFunc(l.toString)).toDF
Review comment:
I considered `transform` at the beginning, since it looks close to pipe. I
didn't pick it because, as far as I can see, it is only exposed as SQL syntax,
and I am not sure whether it works for a streaming Dataset. Another reason is
that it is designed for untyped Datasets: if you want to pipe a complex object
`T` with custom output instead of column-wise output, `TRANSFORM` isn't as
powerful as `pipe`.
That said, I asked our customer, and they only use primitive-type Datasets for
now, so an untyped Dataset should be enough for that purpose.
Another reason is that although the query looks like "SELECT TRANSFORM(...)
FROM ...", it is actually not an expression but is implemented as an operator,
so exposing it as a DSL expression would cause some problems.
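For reference, a minimal sketch of the SQL-only syntax (depending on the Spark
version and build, script transformation may require Hive support to be
enabled):
```scala
// TRANSFORM occupies the entire SELECT list and is planned as a script
// transformation operator, so it cannot be mixed with ordinary projections.
spark.range(4).createOrReplaceTempView("nums")
spark.sql("SELECT TRANSFORM(id) USING 'cat' AS (out STRING) FROM nums").show()
```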
Unlike a window function, it seems to me that we cannot have a query like
"SELECT a, TRANSFORM(...), c FROM ..." or its DSL equivalent:
```scala
df.select($"a", $"b", transform(...) ...)
```
But for a window function we can do:
```scala
df.select($"a", $"b", lead("key", 1).over(window) ...)
```
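(For completeness, a runnable version of that sketch, assuming a DataFrame
`df` with columns `a`, `b`, and `key`, plus `spark.implicits._` in scope:)
```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lead

// lead(...).over(...) is an ordinary expression, so it composes with other
// columns in the same select list, which TRANSFORM cannot do.
val window = Window.orderBy($"key")
df.select($"a", $"b", lead("key", 1).over(window))
```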
That being said, in the end it is also `Dataset.transform`, i.e. a method on
`Dataset`, rather than an expression DSL.
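For comparison, `Dataset.transform` is just a hook for chaining
`Dataset => Dataset` functions, e.g. (a trivial sketch, assuming
`spark.implicits._` is in scope):
```scala
// Dataset.transform applies a whole-Dataset function; it is a method on
// Dataset, not a column expression usable inside select.
val doubled = spark.range(4).transform(ds => ds.withColumn("doubled", $"id" * 2))
```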
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]