swuferhong commented on code in PR #22966:
URL: https://github.com/apache/flink/pull/22966#discussion_r1257790505


##########
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/config/OptimizerConfigOptions.java:
##########
@@ -165,6 +167,51 @@ public class OptimizerConfigOptions {
                             "When it is true, the optimizer will try to push 
dynamic filtering into scan table source,"
                                     + " the irrelevant partitions or input 
data will be filtered to reduce scan I/O in runtime.");
 
+    @Documentation.TableOption(execMode = Documentation.ExecMode.BATCH)
+    public static final ConfigOption<Boolean> 
TABLE_OPTIMIZER_RUNTIME_FILTER_ENABLED =
+            key("table.optimizer.runtime-filter.enabled")
+                    .booleanType()
+                    .defaultValue(false)
+                    .withDescription(
+                            "A flag to enable or disable the runtime filter. "
+                                    + "When it is true, the optimizer will try 
to inject a runtime filter for eligible join.");
+
+    @Documentation.TableOption(execMode = Documentation.ExecMode.BATCH)
+    public static final ConfigOption<MemorySize>
+            TABLE_OPTIMIZER_RUNTIME_FILTER_MAX_BUILD_DATA_SIZE =
+                    key("table.optimizer.runtime-filter.max-build-data-size")
+                            .memoryType()
+                            .defaultValue(MemorySize.parse("10m"))

Review Comment:
   > What do you rely on to determine these default values? Is there any relevant trade-off performance analysis? BTW, can you explain why we need the minimum and maximum values here?
   
   Explained in the comments.
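   
   For context, a minimal sketch of how these two options would be set on a batch `TableEnvironment` (runtime filtering is batch-only per the `ExecMode.BATCH` annotation). The chosen values `true` and `32m` are illustrative assumptions, not recommendations from this PR:
   
   ```java
   import org.apache.flink.configuration.MemorySize;
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   import org.apache.flink.table.api.config.OptimizerConfigOptions;
   
   public class RuntimeFilterConfigExample {
       public static void main(String[] args) {
           // Runtime filter injection only applies in batch mode.
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inBatchMode());
   
           // Enable the runtime filter (the PR's default is false).
           tEnv.getConfig()
                   .set(OptimizerConfigOptions.TABLE_OPTIMIZER_RUNTIME_FILTER_ENABLED, true);
   
           // Cap the build-side data size; 32m is an arbitrary example value,
           // the PR's default is 10m.
           tEnv.getConfig()
                   .set(
                           OptimizerConfigOptions
                                   .TABLE_OPTIMIZER_RUNTIME_FILTER_MAX_BUILD_DATA_SIZE,
                           MemorySize.parse("32m"));
       }
   }
   ```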


