javacaoyu commented on a change in pull request #19126:
URL: https://github.com/apache/flink/pull/19126#discussion_r830861205
##########
File path: flink-python/pyflink/datastream/data_stream.py
##########
@@ -1174,6 +1174,66 @@ def process_element(self, value, ctx: 'KeyedProcessFunction.Context'):
         return self.process(FilterKeyedProcessFunctionAdapter(func),
                             self._original_data_type_info) \
             .name("Filter")
+    def sum(self, position_to_sum: Union[int, str]) -> 'DataStream':
+        """
+        Applies an aggregation that gives a rolling sum of the data stream at the
+        given position grouped by the given key. An independent aggregate is kept
+        per key.
+
+        Example(Tuple data to sum):
+        ::
+
+            >>> ds = env.from_collection([('a', 1), ('a', 2), ('b', 1), ('b', 5)])
+            >>> ds.key_by(lambda x: x[0]).sum(1)
+
+        Example(Row data to sum):
+        ::
+
+            >>> ds = env.from_collection([('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2)],
+            ...                          type_info=Types.ROW([Types.STRING(), Types.INT()]))
+            >>> ds.key_by(lambda x: x[0]).sum(1)
+
+        Example(Row data with fields name to sum):
+        ::
+
+            >>> ds = env.from_collection(
+            ...     [('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2)],
+            ...     type_info=Types.ROW_NAMED(["key", "value"], [Types.STRING(), Types.INT()])
+            ... )
+            >>> ds.key_by(lambda x: x[0]).sum("value")
+
+        :param position_to_sum:
+            The field position in the data points to sum, type can be int or str.
+            This is applicable to Tuple types, and :class:`pyflink.common.Row` types.
+        :return: The transformed DataStream.
+        """
+        if not isinstance(position_to_sum, int) and not isinstance(position_to_sum, str):
+            raise TypeError("The input must be of int or str type to locate the value to sum")
+
+        class SumReduceFunction(ReduceFunction):
+
+            def __init__(self, position_to_sum):
+                self._pos = position_to_sum
+
+            def reduce(self, value1, value2):
+                from numbers import Number
+                if not isinstance(value1[self._pos], Number):
Review comment:
Thanks for your advice.
I think there's no difference between using value1 or value2.
When only a single element has arrived (the first input value), the reduce method is
not called; the first value is simply assigned to the reducing state.
Once a second (or later) element for the key arrives, the reduce method is executed.
So the check effectively runs when the second element comes in.
But I'm not sure what your concern is here. Based on your considerations, could you
give me some more advice?
Thanks
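For reference, here is a minimal sketch (not part of this PR, class and job names are
made up for illustration) of the behaviour described above, using a hand-written
ReduceFunction on a keyed stream: the first element per key is forwarded unchanged,
and reduce is only invoked once the second element for that key arrives.

```python
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.functions import ReduceFunction


class TracingSumReduce(ReduceFunction):
    """Illustrative reducer: sums field 1 and traces every invocation."""

    def reduce(self, value1, value2):
        # Only reached from the second element of a key onward; the first
        # element was stored in the reducing state without calling reduce().
        print("reduce called with", value1, "and", value2)
        return value1[0], value1[1] + value2[1]


env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)

ds = env.from_collection(
    [('a', 1), ('a', 2), ('b', 5)],
    type_info=Types.TUPLE([Types.STRING(), Types.INT()]))

# Emits ('a', 1), then ('a', 3) once reduce() fires for the second 'a',
# and ('b', 5) without any reduce() call, since 'b' has a single element.
ds.key_by(lambda x: x[0]).reduce(TracingSumReduce()).print()

env.execute("keyed-reduce-demo")
```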
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]