zhengcanbin opened a new pull request #11415: [FLINK-15667][k8s] Support to 
mount custom Hadoop Configurations
URL: https://github.com/apache/flink/pull/11415
 
 
   ## What is the purpose of the change
   
   This PR adds support for mounting custom Hadoop configurations into the JM/TM Pods. Two options are provided for the user to mount those configurations:
   - option 1: specify an existing ConfigMap that contains the custom Hadoop configuration; a single ConfigMap can be shared across multiple Flink clusters.
   - option 2: create a dedicated ConfigMap containing the Hadoop configuration files loaded from the local directory specified by the **HADOOP_CONF_DIR** or **HADOOP_HOME** environment variable, and bind that ConfigMap to the lifecycle of the new Flink cluster.
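
   The two options above might look as follows from the user's side. This is only an illustrative sketch: the ConfigMap name `my-hadoop-conf`, the cluster id, and the local paths are placeholders, not values defined by this PR; only the config option `kubernetes.hadoop.conf.config-map.name` comes from the change itself.

```sh
# Option 1: share a pre-created ConfigMap (name is a placeholder) across clusters.
kubectl create configmap my-hadoop-conf --from-file=/etc/hadoop/conf

./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-flink-cluster \
  -Dkubernetes.hadoop.conf.config-map.name=my-hadoop-conf

# Option 2: specify no ConfigMap; a dedicated one is created from the
# directory pointed to by HADOOP_CONF_DIR (or HADOOP_HOME) and is tied
# to the lifecycle of the new cluster.
export HADOOP_CONF_DIR=/etc/hadoop/conf
./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-flink-cluster
```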
   
   ## Brief change log
     - Introduce a new `KubernetesStepDecorator` implementation named 
`HadoopConfMountDecorator`.
     - Add `HadoopConfMountDecorator` to the decorator chains in 
`KubernetesJobManagerFactory` and `KubernetesTaskManagerFactory`.
     - Introduce a new config option `kubernetes.hadoop.conf.config-map.name`.
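
   Conceptually, the decorator slots into the pod-construction chain like the minimal sketch below. The class names `KubernetesStepDecorator` and `HadoopConfMountDecorator` come from this PR, but the method signature, the `FlinkPod` stand-in, and the mount path are simplified assumptions for illustration, not the actual Flink internals.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Flink's pod abstraction (the real one wraps
// fabric8 Kubernetes objects).
class FlinkPod {
    final List<String> volumes = new ArrayList<>();
    final List<String> volumeMounts = new ArrayList<>();
}

// Each step in the decorator chain transforms the pod being built.
interface KubernetesStepDecorator {
    FlinkPod decorateFlinkPod(FlinkPod pod);
}

// Hypothetical sketch: mount the Hadoop conf ConfigMap as a volume
// into the JM/TM pod so the files become visible to the Flink process.
class HadoopConfMountDecorator implements KubernetesStepDecorator {
    private final String configMapName;

    HadoopConfMountDecorator(String configMapName) {
        this.configMapName = configMapName;
    }

    @Override
    public FlinkPod decorateFlinkPod(FlinkPod pod) {
        // Register a ConfigMap-backed volume and mount it at an
        // assumed Hadoop conf path inside the container.
        pod.volumes.add("configMap:" + configMapName);
        pod.volumeMounts.add("/opt/hadoop/conf");
        return pod;
    }
}
```

   The factories then simply append this decorator to their existing chains, so JobManager and TaskManager pods are decorated uniformly.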
   
   
   ## Verifying this change
   
   This change added unit tests and can additionally be verified manually as follows:
     -  Specify an existing Hadoop configuration ConfigMap when starting a new Flink cluster and make sure that the ConfigMap is mounted into the Pods. Then delete the Deployment and make sure that the existing ConfigMap is not deleted.
     -  Do not specify an existing ConfigMap; instead, export a HADOOP_CONF_DIR that contains Hadoop configuration files when starting a new Flink cluster, and make sure that a dedicated ConfigMap is created and mounted into the Pods.
     -  Do not specify an existing ConfigMap; instead, export a HADOOP_HOME that contains Hadoop configuration files when starting a new Flink cluster, and make sure that a dedicated ConfigMap is created and mounted into the Pods.
     -  Specify an existing ConfigMap and export HADOOP_CONF_DIR when starting a new Flink cluster, and make sure that no dedicated Hadoop ConfigMap is created and the existing one is used.
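
   The manual checks above can be sketched with `kubectl` against a live cluster (the cluster id and ConfigMap name below are placeholders, not values from this PR):

```sh
# Check that a Hadoop conf ConfigMap volume is mounted into the JM/TM Pods.
kubectl get pods -l app=my-flink-cluster -o yaml | grep -B2 -A2 configMap

# After deleting the Deployment, a user-provided ConfigMap should survive,
# while a dedicated (cluster-bound) one should be cleaned up with the cluster.
kubectl delete deployment my-flink-cluster
kubectl get configmap my-hadoop-conf
```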
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (**yes** / no)
     - If yes, how is the feature documented? (not applicable / **docs** / 
JavaDocs / not documented)
