[
https://issues.apache.org/jira/browse/SPARK-6817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15095860#comment-15095860
]
Sun Rui commented on SPARK-6817:
--------------------------------
I agree that R's efficiency comes from vectorization. Here a UDF is a function
that can be invoked in SQL queries, which are row-oriented. But row orientation
does not necessarily mean an R UDF must process one row at a time. In fact, the
projected rows (the columns selected as input parameters for the UDF) can be
batched, or even passed as a whole partition (if OOM is not a concern), and then
sent to an R worker. The R worker can load the batch of rows into in-memory
vectors or lists, so the R UDF can still perform vectorized operations.
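As a rough sketch of this idea (the helper and data layout here are hypothetical, not the actual SparkR worker protocol): a batch of projected rows is transposed into column vectors once, so the UDF body executes as a single vectorized call rather than once per row.

```r
# Hypothetical sketch: an R worker receives a batch of rows for a UDF
# with signature f(a, b) and evaluates it vectorized over the batch.
apply_udf_batched <- function(udf, rows) {
  # rows: a list of rows, each a list of the projected input columns.
  # Transpose the row batch into column vectors...
  a <- vapply(rows, function(r) r[[1]], numeric(1))
  b <- vapply(rows, function(r) r[[2]], numeric(1))
  # ...then invoke the UDF once over whole vectors, not per row.
  udf(a, b)
}

rows <- list(list(1, 10), list(2, 20), list(3, 30))
result <- apply_udf_batched(function(a, b) a * b, rows)
# result is c(10, 40, 90)
```

The per-batch transpose cost is paid once, after which the UDF enjoys the same vectorized execution it would have on a local data.frame.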
The point here is support for column-oriented UDFs, which resemble UDAFs; but I
doubt a UDAF is an exact match, because a UDAF returns only one value per
column, whereas in R an operation on a column may return another non-scalar
column.
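To illustrate the distinction in plain R: an aggregation (the UDAF shape) collapses a column to a single scalar, while many common R column operations return another column, which a UDAF cannot express.

```r
x <- c(2, 4, 6, 8)

# UDAF-like: collapses the column to one scalar value.
total <- sum(x)

# Column-oriented UDF: returns another non-scalar column
# of the same length as the input.
scaled <- x / max(x)

length(total)   # 1
length(scaled)  # 4
```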
> DataFrame UDFs in R
> -------------------
>
> Key: SPARK-6817
> URL: https://issues.apache.org/jira/browse/SPARK-6817
> Project: Spark
> Issue Type: New Feature
> Components: SparkR, SQL
> Reporter: Shivaram Venkataraman
> Attachments: SparkR UDF Design Documentation v1.pdf
>
>
> This depends on some internal interface of Spark SQL, should be done after
> merging into Spark.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)