zhuzhurk commented on a change in pull request #10452: 
[FLINK-15035][table-planner-blink] Introduce unknown memory setting to table in blink planner
URL: https://github.com/apache/flink/pull/10452#discussion_r355105514
 
 

 ##########
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/exec/BatchExecNode.scala
 ##########
 @@ -32,4 +33,13 @@ trait BatchExecNode[T] extends ExecNode[BatchPlanner, T] with Logging {
     */
   def getDamBehavior: DamBehavior
 
 +  def setManagedMemoryWeight[X](
 +      transformation: Transformation[X], memoryBytes: Long): Transformation[X] = {
 +    // A raw byte count can overflow an Int, so convert to mebibytes
 +    // before casting, and clamp to at least 1 so that sizes below
 +    // 1 MiB do not yield a weight of zero.
 +    val memoryMB = Math.max(1, (memoryBytes >> 20).toInt)
 
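The bytes-to-mebibytes conversion in the diff can be sketched in isolation (a minimal illustration of the same shift-and-clamp logic; the object and method names here are hypothetical, not part of the PR):

```scala
object ManagedMemoryWeight {
  // Convert a managed-memory size in bytes to whole mebibytes.
  // Shifting right by 20 divides by 1024 * 1024, so the result fits
  // comfortably in an Int for any realistic memory size. Math.max(1, ...)
  // guards against a weight of zero when the size is below 1 MiB.
  def toMebiBytes(memoryBytes: Long): Int =
    Math.max(1, (memoryBytes >> 20).toInt)

  def main(args: Array[String]): Unit = {
    println(toMebiBytes(512L * 1024))        // 0.5 MiB clamps to 1
    println(toMebiBytes(256L * 1024 * 1024)) // 256 MiB -> 256
  }
}
```

The clamp matters because a managed memory weight of zero would mean the operator is allocated no managed memory at all, even though a non-zero byte size was requested.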
 Review comment:
   Just to confirm: could users' custom table sources/sinks use managed memory?
   
   If custom table sources/sinks never use managed memory, table can always override those user operators' managed memory size/weight to 0. (Table can see all operators in the StreamGraph and knows which operators come from table; the others should be custom source/sink operators, I think.)
   If custom table sources/sinks may use managed memory, then table should guarantee that those custom operators can still acquire managed memory despite the competition from the managed memory weights of table operators. Otherwise those source/sink tasks may fail.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
