Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/13660#discussion_r67915708
--- Diff: docs/sparkr.md ---
@@ -262,6 +262,83 @@ head(df)
{% endhighlight %}
</div>
+### Applying User-Defined Functions
+In SparkR, we support several kinds of user-defined functions:
+
+#### Run a given function on a large dataset using `dapply` or `dapplyCollect`
+
+##### dapply
+Apply a function to each partition of a `SparkDataFrame`. The function to be applied to each partition
+should have only one parameter, a `data.frame` corresponding to that partition, and its output should
+also be a `data.frame`. The schema specifies the row format of the resulting `SparkDataFrame` and must
+match the R function's output.
+<div data-lang="r" markdown="1">
+{% highlight r %}
+
+# Convert waiting time from minutes to seconds.
+# Note that we can apply a UDF to a SparkDataFrame.
+schema <- structType(structField("eruptions", "double"),
structField("waiting", "double"),
+ structField("waiting_secs", "double"))
+df1 <- dapply(df, function(x) {x <- cbind(x, x$waiting * 60)}, schema)
+head(collect(df1))
+## eruptions waiting waiting_secs
+##1 3.600 79 4740
+##2 1.800 54 3240
+##3 3.333 74 4440
+##4 2.283 62 3720
+##5 4.533 85 5100
+##6 2.883 55 3300
+{% endhighlight %}
+</div>
+
+##### dapplyCollect
+Like `dapply`, apply a function to each partition of `SparkDataFrame` and
collect the result back. The output of function
+should be a `data.frame`. But, Schema is not required to be passed. Note
that `dapplyCollect` only can be used if the
+output of UDF run on all the partitions can fit in driver memory.
+<div data-lang="r" markdown="1">
+{% highlight r %}
+
+# Convert waiting time from minutes to seconds.
+# Note that we can apply a UDF to a SparkDataFrame and return an R data.frame.
+ldf <- dapplyCollect(
+         df,
+         function(x) {
+           x <- cbind(x, "waiting_secs" = x$waiting * 60)
+         })
+head(ldf, 3)
+## eruptions waiting waiting_secs
+##1 3.600 79 4740
+##2 1.800 54 3240
+##3 3.333 74 4440
+
+{% endhighlight %}
+</div>
+
+#### Run many functions in parallel using `spark.lapply`
+
+##### lapply
--- End diff --
I think spark.lapply is more about running native R code distributed, whereas Dataset would be
typed data distributed (without native R); those seem fairly orthogonal to me.
Speaking of which, how should we address local R here? Should this say "Run local R functions
in parallel using spark.lapply" or "Run local R functions distributed using spark.lapply"?