xintongsong commented on a change in pull request #9105:
[FLINK-13241][Yarn/Mesos] Fix Yarn/MesosResourceManager setting managed memory size into wrong configuration instance.
URL: https://github.com/apache/flink/pull/9105#discussion_r304701115
 
 

 ##########
 File path: flink-mesos/src/main/java/org/apache/flink/mesos/runtime/clusterframework/MesosResourceManager.java
 ##########
 @@ -200,7 +200,7 @@ public MesosResourceManager(
 
                final ContaineredTaskManagerParameters containeredTaskManagerParameters = taskManagerParameters.containeredParameters();
                this.slotsPerWorker = updateTaskManagerConfigAndCreateWorkerSlotProfiles(
-                       flinkConfig, containeredTaskManagerParameters.taskManagerTotalMemoryMB(), containeredTaskManagerParameters.numSlots());
+                       this.flinkConfig, containeredTaskManagerParameters.taskManagerTotalMemoryMB(), containeredTaskManagerParameters.numSlots());
 
 Review comment:
   AFAIK, we don't have any IT case for Mesos, probably because we don't have a Mesos mini cluster. So I guess it's hard to verify that TMs are started with the proper configuration the way we do for Yarn.
   
   We could add a `testCreateSlotsPerWorker` to `MesosResourceManagerTest` as well. When I first added this test case for Yarn, I thought adding the same test for Mesos might be redundant, because the same `createSlotsPerWorker` method of the common base class `ResourceManager` is invoked for both Yarn and Mesos. But now I agree with you that such a test would be useful to cover the case where `MesosResourceManager` also updates the wrong configuration instance.
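   For readers following along: the one-character fix in the diff (`flinkConfig` → `this.flinkConfig`) matters because the constructor parameter shadows the field, and the two can refer to different `Configuration` instances. The sketch below is a hypothetical, simplified illustration of that pitfall; the class and key names are illustrative, not Flink's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a key/value configuration object (illustrative, not Flink's class).
class Configuration {
    private final Map<String, String> values = new HashMap<>();
    void setString(String key, String value) { values.put(key, value); }
    String getString(String key) { return values.get(key); }
}

class ResourceManagerSketch {
    final Configuration flinkConfig;   // the instance the RM reads from later

    ResourceManagerSketch(Configuration flinkConfig, long managedMemoryMB) {
        this.flinkConfig = new Configuration();          // a distinct instance
        // BUG: the unqualified name resolves to the constructor parameter,
        // so the setting lands on the caller's object, not the field above.
        flinkConfig.setString("managed-memory-mb", Long.toString(managedMemoryMB));
        // FIX (what this PR does): qualify with `this.` to target the field:
        // this.flinkConfig.setString("managed-memory-mb", Long.toString(managedMemoryMB));
    }
}

public class ShadowingDemo {
    public static void main(String[] args) {
        Configuration callerConfig = new Configuration();
        ResourceManagerSketch rm = new ResourceManagerSketch(callerConfig, 1024);
        // The configuration the RM actually uses was never updated:
        System.out.println(rm.flinkConfig.getString("managed-memory-mb")); // null
        // The caller's instance silently received the value instead:
        System.out.println(callerConfig.getString("managed-memory-mb"));   // 1024
    }
}
```

   A `testCreateSlotsPerWorker`-style test would catch exactly this: assert on the instance the resource manager holds, not on the one passed into the constructor.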
