[
https://issues.apache.org/jira/browse/FLINK-1269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17323744#comment-17323744
]
Flink Jira Bot commented on FLINK-1269:
---------------------------------------
This issue is assigned but has not received an update in 7 days so it has been
labeled "stale-assigned". If you are still working on the issue, please give an
update and remove the label. If you are no longer working on the issue, please
unassign so someone else may work on it. In 7 days the issue will be
automatically unassigned.
> Easy way to "group count" dataset
> ---------------------------------
>
> Key: FLINK-1269
> URL: https://issues.apache.org/jira/browse/FLINK-1269
> Project: Flink
> Issue Type: New Feature
> Components: API / DataSet
> Affects Versions: 0.7.0-incubating
> Reporter: Sebastian Schelter
> Assignee: Suneel Marthi
> Priority: Major
> Labels: stale-assigned
>
> Flink should offer an easy way to group datasets and compute the sizes of the
> resulting groups. This is one of the most essential operations in distributed
> processing, yet it is very hard to implement in Flink.
> I assume it could be a show-stopper for people trying Flink, because at the
> moment, users have to perform the grouping and then write a groupReduce that
> counts the tuples in the group and extracts the group key at the same time.
> Here is what I would semantically expect to happen:
> {noformat}
> def groupCount[T, K](data: DataSet[T], extractKey: (T) => K): DataSet[(K, Long)] = {
>   data.groupBy { extractKey }
>       .reduceGroup { group => countBy(extractKey, group) }
> }
>
> private[this] def countBy[T, K](extractKey: T => K,
>                                 group: Iterator[T]): (K, Long) = {
>   val key = extractKey(group.next())
>   var count = 1L
>   while (group.hasNext) {
>     group.next()
>     count += 1
>   }
>   key -> count
> }
> {noformat}
>
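The counting logic sketched in the issue does not actually depend on Flink: the `countBy` helper only consumes a plain `Iterator`. As a standalone sketch, it can be exercised locally by emulating `groupBy` + `reduceGroup` with Scala collections (the `words` data and `GroupCountSketch` object name are illustrative, not part of the issue):

```scala
object GroupCountSketch {
  // Count the elements of one group, extracting the group key from
  // the first element; mirrors the countBy helper from the issue.
  def countBy[T, K](extractKey: T => K, group: Iterator[T]): (K, Long) = {
    val key = extractKey(group.next())
    var count = 1L
    while (group.hasNext) {
      group.next()
      count += 1
    }
    key -> count
  }

  def main(args: Array[String]): Unit = {
    val words = Seq("ant", "bee", "ant", "cat", "ant", "bee")
    // Emulate groupBy + reduceGroup on local collections.
    val counts = words.groupBy(identity).map {
      case (_, group) => countBy(identity[String], group.iterator)
    }
    assert(counts.toSet == Set("ant" -> 3L, "bee" -> 2L, "cat" -> 1L))
    println(counts.toSet)
  }
}
```

In a real Flink job the grouping and reduction would run distributed via `DataSet.groupBy(...).reduceGroup(...)`, with `countBy` applied per group exactly as above.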
--
This message was sent by Atlassian Jira
(v8.3.4#803005)