[
https://issues.apache.org/jira/browse/SPARK-10405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726331#comment-14726331
]
ashish shenoy edited comment on SPARK-10405 at 9/1/15 10:39 PM:
----------------------------------------------------------------
[~srowen] yes, technically it's a good-to-have, not a must-have. I can think of
many instances where such an API would be very convenient and useful for users.
I was using aggregateByKey() with a custom-written bounded priority queue.
As per the Spark documentation, the func param to foldByKey() should be an
associative merge function, so I can see how that can be used to get the
max or min value per key, but not the top or bottom K values. Since I am a
Spark newbie, could you please give an example of how one would use a priority
queue with foldByKey()?
Also, the default PriorityQueue implementation in java.util is unbounded; could
that cause OOM exceptions if the cardinality of the key set is very large?
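For context, here is a sketch of the bounded-priority-queue approach I meant. The class name BoundedPriorityQueue and the size bound are my own illustration, not a Spark API; the add/merge methods are shaped to serve as the seqOp and combOp arguments of aggregateByKey():

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative helper (not a Spark class): a min-heap capped at maxSize
// elements, so it always retains the maxSize largest values seen.
public class BoundedPriorityQueue {
    private final int maxSize;
    private final PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap

    public BoundedPriorityQueue(int maxSize) {
        this.maxSize = maxSize;
    }

    // Usable as the seqOp: fold one value in, evicting the smallest
    // element when the bound is exceeded, which keeps memory bounded.
    public BoundedPriorityQueue add(int value) {
        heap.offer(value);
        if (heap.size() > maxSize) {
            heap.poll(); // drop the current smallest
        }
        return this;
    }

    // Usable as the combOp: merge a partial queue from another partition.
    public BoundedPriorityQueue merge(BoundedPriorityQueue other) {
        for (int v : other.heap) {
            add(v);
        }
        return this;
    }

    // Return the retained top values, largest first.
    public List<Integer> toSortedList() {
        List<Integer> out = new ArrayList<>(heap);
        out.sort(Collections.reverseOrder());
        return out;
    }
}
```

With a JavaPairRDD this would be wired up roughly as `rdd.aggregateByKey(new BoundedPriorityQueue(k), BoundedPriorityQueue::add, BoundedPriorityQueue::merge)`; my question is whether something equivalent is expressible with foldByKey()'s single associative function.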
> Support takeOrdered and topK values per key
> -------------------------------------------
>
> Key: SPARK-10405
> URL: https://issues.apache.org/jira/browse/SPARK-10405
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Reporter: ashish shenoy
> Labels: features, newbie
>
> Spark provides the top() and takeOrdered() APIs that return "top" or "bottom"
> items from a given RDD.
> It'd be good to have an API that returns the "top" values per key for a
> keyed RDD, i.e. a pair RDD. Such an API would be very useful for cases where
> the task is to display only an ordered subset of the input data.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]