[ 
https://issues.apache.org/jira/browse/FLINK-27750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dong updated FLINK-27750:
-------------------------
    Description: 
I deploy a job in Kubernetes application mode by constructing a KubernetesClusterDescriptor and a Fabric8FlinkKubeClient. The code is shown below.
{code:java}
// Initialize flinkConfiguration and set options, including TOTAL_PROCESS_MEMORY
Configuration flinkConfiguration = GlobalConfiguration.loadConfiguration();
flinkConfiguration
        .set(DeploymentOptions.TARGET, KubernetesDeploymentTarget.APPLICATION.getName())
        .set(PipelineOptions.JARS, Collections.singletonList(flinkDistJar))
        .set(KubernetesConfigOptions.CLUSTER_ID, "APPLICATION1")
        .set(KubernetesConfigOptions.CONTAINER_IMAGE, "img_url")
        .set(KubernetesConfigOptions.CONTAINER_IMAGE_PULL_POLICY,
                KubernetesConfigOptions.ImagePullPolicy.Always)
        .set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024M"))
        .set...;

// Construct the KubernetesClusterDescriptor with a Fabric8FlinkKubeClient
KubernetesClusterDescriptor kubernetesClusterDescriptor =
        new KubernetesClusterDescriptor(
                flinkConfiguration,
                new Fabric8FlinkKubeClient(
                        flinkConfiguration,
                        new DefaultKubernetesClient(),
                        Executors.newFixedThreadPool(2)));
ApplicationConfiguration applicationConfiguration =
        new ApplicationConfiguration(execArgs, null);

// Deploy the job in Kubernetes application mode
ClusterClient<String> clusterClient =
        kubernetesClusterDescriptor
                .deployApplicationCluster(
                        new ClusterSpecification.ClusterSpecificationBuilder()
                                .createClusterSpecification(),
                        applicationConfiguration)
                .getClusterClient();

String clusterId = clusterClient.getClusterId();
{code}
As shown above, I set TOTAL_PROCESS_MEMORY to 1024M. The Flink UI displays the following memory configuration, which is clearly correct (448 + 128 + 256 + 192 = 1024).

!image-2022-05-24-14-00-39-255.png|width=759,height=255!
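
For reference, this breakdown appears to match the default jobmanager.memory.* values (my assumption: 128M off-heap, 256M JVM metaspace, and the JVM overhead clamped to its 192M minimum), leaving 448M of JVM heap. A rough sketch of that arithmetic, with those defaults assumed:
{code:java}
// Rough sketch of the breakdown shown in the UI, assuming the default
// jobmanager.memory.* values (off-heap 128M, metaspace 256M, overhead
// fraction 0.1 clamped to [192M, 1G]). Not taken from the Flink sources.
long totalProcessMb = 1024;
long offHeapMb = 128;
long metaspaceMb = 256;
long overheadMb = Math.max(192, Math.min(1024, Math.round(totalProcessMb * 0.1))); // -> 192
long heapMb = totalProcessMb - offHeapMb - metaspaceMb - overheadMb;               // -> 448
System.out.println(heapMb + " + " + offHeapMb + " + " + metaspaceMb + " + "
        + overheadMb + " = " + totalProcessMb); // 448 + 128 + 256 + 192 = 1024
{code}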

But when I inspect the JobManager deployment with {_}kubectl describe deployment{_}, I find that the JobManager pod memory is always 768M, whereas it should equal TOTAL_PROCESS_MEMORY (1024M). No matter how I adjust the TOTAL_PROCESS_MEMORY parameter, the pod memory does not change.

!image-2022-05-24-14-18-30-063.png!

As a result, the pod gets OOMKilled when the JobManager memory usage exceeds 768M.

I expect the JobManager pod memory to equal TOTAL_PROCESS_MEMORY so that I can adjust the memory to suit my needs.

Is there something wrong with my configuration, or should the JobManager pod be allocated the same amount of memory as TOTAL_PROCESS_MEMORY?
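
For reference, one variant I am considering (my own sketch, not verified): building the ClusterSpecification from the same flinkConfiguration via KubernetesClusterClientFactory instead of relying on the ClusterSpecificationBuilder defaults, in case the pod's memory request is taken from the ClusterSpecification rather than from TOTAL_PROCESS_MEMORY. The builder's default master memory seems to be 768MB, which would match what I observe.
{code:java}
// Sketch (not verified): derive the ClusterSpecification from flinkConfiguration,
// so that its master memory follows jobmanager.memory.process.size instead of the
// ClusterSpecificationBuilder defaults.
KubernetesClusterClientFactory clientFactory = new KubernetesClusterClientFactory();
ClusterSpecification clusterSpecification =
        clientFactory.getClusterSpecification(flinkConfiguration);

ClusterClient<String> clusterClient =
        kubernetesClusterDescriptor
                .deployApplicationCluster(clusterSpecification, applicationConfiguration)
                .getClusterClient();
{code}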

> The configuration of 
> JobManagerOptions.TOTAL_PROCESS_MEMORY (jobmanager.memory.process.size) does not work
> ---------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-27750
>                 URL: https://issues.apache.org/jira/browse/FLINK-27750
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes
>    Affects Versions: 1.14.4
>            Reporter: dong
>            Priority: Major
>              Labels: TOTAL_PROCESS_MEMORY, jobmanager.memory.process.size
>         Attachments: image-2022-05-24-14-00-39-255.png, 
> image-2022-05-24-14-18-30-063.png
>
>



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
