HuangXingBo opened a new pull request #14628:
URL: https://github.com/apache/flink/pull/14628
## What is the purpose of the change
*This pull request optimizes state access in PyFlink with a cross-bundle state cache.*
## Brief change log
- *Let the StateRequestHandler use a single cache token for user state, so that cached state entries remain valid across bundles*
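To illustrate the idea behind the change (this is a hedged, toy sketch, not Flink's actual `StateRequestHandler` implementation): the client-side state cache is keyed by a cache token. If a fresh token is issued per bundle, every lookup in the next bundle misses and falls back to a remote fetch; reusing one stable token for all user state keeps entries warm across bundle boundaries. All class and token names below are hypothetical.

```python
class StateCache:
    """Toy client-side state cache keyed by (cache_token, state_key)."""

    def __init__(self):
        self._cache = {}

    def get(self, token, key):
        return self._cache.get((token, key))

    def put(self, token, key, value):
        self._cache[(token, key)] = value


cache = StateCache()

# Before: a new token per bundle invalidates everything between bundles.
cache.put("bundle-1-token", "acc", [10, 2])
assert cache.get("bundle-2-token", "acc") is None  # miss -> remote state fetch

# After: one token for all user state keeps entries alive across bundles.
cache.put("user-state-token", "acc", [10, 2])
assert cache.get("user-state-token", "acc") == [10, 2]  # cache hit
```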
## Verifying this change
- *Covered by the existing tests*
## Does this pull request potentially affect one of the following parts:
- Dependencies (does it add or upgrade a dependency): (no)
- The public API, i.e., is any changed class annotated with
`@Public(Evolving)`: (no)
- The serializers: (no)
- The runtime per-record code paths (performance sensitive): (no)
- Anything that affects deployment or recovery: JobManager (and its
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
- The S3 file system connector: (no)
## Documentation
- Does this pull request introduce a new feature? (yes / no)
- If yes, how is the feature documented? (not applicable / docs / JavaDocs
/ not documented)
```python
from typing import TypeVar

from pyflink.table import AggregateFunction, DataTypes
from pyflink.table.types import DataType

T = TypeVar("T")
ACC = TypeVar("ACC")


class MeanAggregateFunction(AggregateFunction):

    def get_value(self, accumulator: ACC) -> T:
        if accumulator[1] == 0:
            return None
        else:
            return accumulator[0] / accumulator[1]

    def create_accumulator(self) -> ACC:
        # accumulator[0] = running sum, accumulator[1] = count
        return [0, 0]

    def accumulate(self, accumulator: ACC, *args):
        accumulator[0] += args[0]
        accumulator[1] += 1

    def retract(self, accumulator: ACC, *args):
        accumulator[0] -= args[0]
        accumulator[1] -= 1

    def merge(self, accumulator: ACC, accumulators):
        for other_acc in accumulators:
            accumulator[0] += other_acc[0]
            accumulator[1] += other_acc[1]

    def get_accumulator_type(self) -> DataType:
        return DataTypes.ARRAY(DataTypes.BIGINT())

    def get_result_type(self) -> DataType:
        return DataTypes.FLOAT()
```
### Test Code
```python
import time

from pyflink.datastream import StreamExecutionEnvironment, TimeCharacteristic
from pyflink.table import DataTypes, EnvironmentSettings, StreamTableEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)
env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
environment_settings = EnvironmentSettings.new_instance().use_blink_planner().build()
t_env = StreamTableEnvironment.create(env, environment_settings=environment_settings)
t_env.get_config().get_configuration().set_integer("python.fn-execution.bundle.time", 1000)
t_env.get_config().get_configuration().set_boolean("pipeline.object-reuse", True)
t_env.create_temporary_function("python_avg", MeanAggregateFunction())
t_env.create_java_temporary_system_function("java_avg",
                                            "com.alibaba.flink.function.JavaAvg")

num_rows = 10000000
t_env.execute_sql(f"""
    CREATE TABLE source (
        id INT,
        num INT,
        rowtime TIMESTAMP(3),
        WATERMARK FOR rowtime AS rowtime - INTERVAL '60' MINUTE
    ) WITH (
        'connector' = 'Range',
        'start' = '1',
        'end' = '{num_rows}',
        'step' = '1',
        'partition' = '200'
    )
""")

# PrintTableSink is a test sink from the Flink test utilities.
t_env.register_table_sink(
    "sink",
    PrintTableSink(
        ["num", "value"],
        [DataTypes.INT(False), DataTypes.FLOAT(False)], 1000000))

result = t_env.from_path("source") \
    .select("num % 1000 as num, id") \
    .group_by("num") \
    .select("num, python_avg(id)")
result.insert_into("sink")

beg_time = time.time()
t_env.execute("Python UDF")
print("PyFlink stream group agg consume time: " + str(time.time() - beg_time))
```
## Test Results
num rows, num columns | Consume Time (Before) | Consume Time (After)
--- | --- | ---
10,000,000, 3 | 113.30s | 85.93s
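For reference, the numbers in the table work out to roughly a 1.32x speedup, i.e. about a 24% reduction in wall-clock time:

```python
before, after = 113.30, 85.93  # seconds, from the table above

speedup = before / after               # how many times faster
reduction = (before - after) / before  # fraction of time saved

print(f"speedup: {speedup:.2f}x, reduction: {reduction:.1%}")
# -> speedup: 1.32x, reduction: 24.2%
```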