xuanyuanking commented on a change in pull request #31296:
URL: https://github.com/apache/spark/pull/31296#discussion_r563011061
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala
##########
@@ -2889,6 +2889,24 @@ class Dataset[T] private[sql](
flatMap(func)(encoder)
}
+  /**
+   * Return a new Dataset of string created by piping elements to a forked external process.
+   * The resulting Dataset is computed by executing the given process once per partition.
+   * All elements of each input partition are written to a process's stdin as lines of input
+   * separated by a newline. The resulting partition consists of the process's stdout output, with
+   * each line of stdout resulting in one element of the output partition. A process is invoked
+   * even for empty partitions.
+   *
+   * @param command command to run in forked process.
+   *
+   * @group typedrel
+   * @since 3.2.0
+   */
+  def pipe(command: String): Dataset[String] = {
Review comment:
An open question: should we also expose in this API the other params that the RDD-level `pipe` supports:
```scala
def pipe(
    command: Seq[String],
    env: Map[String, String] = Map(),
    printPipeContext: (String => Unit) => Unit = null,
    printRDDElement: (T, String => Unit) => Unit = null,
    separateWorkingDir: Boolean = false,
    bufferSize: Int = 8192,
    encoding: String = Codec.defaultCharsetCodec.name): RDD[String]
```
I believe `pipe(command: String)` covers the most common use case, but I'm not sure how many scenarios need the other params (the environment variables seem like the most useful of them).