[ https://issues.apache.org/jira/browse/FLINK-3477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301061#comment-15301061 ]
ASF GitHub Bot commented on FLINK-3477:
---------------------------------------
Github user fhueske commented on a diff in the pull request:
https://github.com/apache/flink/pull/1517#discussion_r64667955
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/RandomAccessInputView.java ---
@@ -45,7 +45,12 @@ public RandomAccessInputView(ArrayList<MemorySegment> segments, int segmentSize)
 	{
 		this(segments, segmentSize, segmentSize);
 	}
-
+
+	public RandomAccessInputView(ArrayList<MemorySegment> segments, int segmentSize, boolean dummy)
--- End diff ---
This constructor is called twice from the constructor of the
`ReduceHashTable`. Once to initialize the input view of the `RecordArea` and
once for the `StagingArea`. Both areas will need at least one buffer. Maybe I
am wrong, but if we give both one initial buffer, we do not need this
additional constructor with the `dummy` flag.
As a second alternative to the constructor with the `dummy` flag, we could
also implement a constructor without the `ArrayList<MemorySegment>` parameter,
create the list inside the constructor, and add a `getSegmentList()` method to
access the created list.
What do you think, @ggevay?
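The second alternative could look roughly like this (a hedged sketch, not the actual Flink class: `MemorySegment` is stubbed out here, the class name is changed to keep the sketch self-contained, and the real `RandomAccessInputView` has additional fields and constructors):

```java
import java.util.ArrayList;

public class RandomAccessInputViewSketch {
    // Stand-in for org.apache.flink.core.memory.MemorySegment (assumption,
    // just to make the sketch compile on its own).
    static class MemorySegment {}

    private final ArrayList<MemorySegment> segments;
    private final int segmentSize;

    // Existing-style constructor: the caller supplies the segment list.
    public RandomAccessInputViewSketch(ArrayList<MemorySegment> segments, int segmentSize) {
        this.segments = segments;
        this.segmentSize = segmentSize;
    }

    // Proposed alternative: create the list internally, so no 'dummy' flag
    // is needed to disambiguate constructors.
    public RandomAccessInputViewSketch(int segmentSize) {
        this(new ArrayList<MemorySegment>(), segmentSize);
    }

    // Accessor so the caller (e.g. ReduceHashTable) can still reach and
    // fill the internally created list.
    public ArrayList<MemorySegment> getSegmentList() {
        return segments;
    }
}
```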
> Add hash-based combine strategy for ReduceFunction
> --------------------------------------------------
>
> Key: FLINK-3477
> URL: https://issues.apache.org/jira/browse/FLINK-3477
> Project: Flink
> Issue Type: Sub-task
> Components: Local Runtime
> Reporter: Fabian Hueske
> Assignee: Gabor Gevay
>
> This issue is about adding a hash-based combine strategy for ReduceFunctions.
> The interface of the {{reduce()}} method is as follows:
> {code}
> public T reduce(T v1, T v2)
> {code}
> Input type and output type are identical and the function returns only a
> single value. A ReduceFunction is incrementally applied to compute a final
> aggregated value. This allows holding the pre-aggregated value in a hash
> table and updating it with each function call.
> The hash-based strategy requires a special implementation of an in-memory
> hash table. The hash table should support in-place updates of elements (if
> the new value has the same binary length as the old value) but also
> appending updates with invalidation of the old value (if the binary length
> of the new value differs). The hash table needs to be able to evict and
> emit all elements if it runs out of memory.
> We should also add {{HASH}} and {{SORT}} compiler hints to
> {{DataSet.reduce()}} and {{Grouping.reduce()}} to allow users to pick the
> execution strategy.
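For illustration, the combine idea described above can be sketched with a plain `java.util.HashMap` (this is only a sketch of the strategy, not Flink's in-memory hash table, which operates on binary records and manages its own memory segments; the class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BinaryOperator;

public class HashCombineSketch {
    // Keep one pre-aggregated value per key and fold each incoming record
    // into it with the reduce function. Because reduce(v1, v2) has identical
    // input and output types and returns a single value, the running
    // aggregate can simply replace the stored entry.
    public static <K, T> Map<K, T> combine(
            Iterable<Map.Entry<K, T>> records, BinaryOperator<T> reduce) {
        Map<K, T> table = new HashMap<>();
        for (Map.Entry<K, T> record : records) {
            // merge() inserts the value for a new key, or applies the reduce
            // function to the stored pre-aggregate and the new value.
            table.merge(record.getKey(), record.getValue(), reduce);
        }
        return table;
    }
}
```

For example, summing `Integer` values per key this way calls the reduce function once per duplicate key and keeps only one value per key in memory, instead of buffering or sorting all input records before combining.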
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)