rmetzger commented on a change in pull request #14346:
URL: https://github.com/apache/flink/pull/14346#discussion_r540269120



##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -204,191 +198,25 @@ You can provide the following additional command line arguments to the cluster e
 
 If the main function of the user job main class accepts arguments, you can also pass them at the end of the `docker run` command.
 
-## Customize Flink image
-
-When you run the Flink containers, you may need to customize them.
-The next chapters describe some common ways to do so.
-
-### Configure options
-
-When you run the Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
-
-```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: host
-taskmanager.numberOfTaskSlots: 3
-blob.server.port: 6124
-"
-docker run --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-```
-
-The [`jobmanager.rpc.address`]({% link deployment/config.md %}#jobmanager-rpc-address) option must be configured; the others are optional.
-
-The environment variable `FLINK_PROPERTIES` should contain a list of Flink cluster configuration options separated by newlines,
-the same way as in `flink-conf.yaml`. `FLINK_PROPERTIES` takes precedence over configurations in `flink-conf.yaml`.
-
-### Provide custom configuration
-
-The configuration files (`flink-conf.yaml`, logging, hosts etc) are located in the `/opt/flink/conf` directory in the Flink image.
-To provide a custom location for the Flink configuration files, you can
-
-* **either mount a volume** with the custom configuration files to the path `/opt/flink/conf` when you run the Flink image:
-
-    ```sh
-    docker run \
-        --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-    ```
-
-* or add them to your **custom Flink image**, build and run it:
-
-    *Dockerfile*:
-
-    ```dockerfile
-    FROM flink
-    ADD /host/path/to/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /opt/flink/conf/log4j.properties
-    ```
-
-<span class="label label-warning">Warning!</span> The mounted volume must contain all necessary configuration files.
-The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
-
-### Using plugins
-
-As described on the [plugins]({% link deployment/filesystems/plugins.md %}) documentation page, plugins must be copied to the correct location in the Flink installation inside the Docker container in order to work.
-
-If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
-`ENABLE_BUILT_IN_PLUGINS` should contain a list of plugin jar file names separated by `;`. A valid plugin name is, for example, `flink-s3-fs-hadoop-{{site.version}}.jar`.
-
-```sh
-    docker run \
-        --env ENABLE_BUILT_IN_PLUGINS="flink-plugin1.jar;flink-plugin2.jar" \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-```
-
-There are also more [advanced ways](#advanced-customization) of customizing the Flink image.
-
-### Switch memory allocator
-
-Flink introduced `jemalloc` as the default memory allocator to resolve a memory fragmentation problem (please refer to [FLINK-19125](https://issues.apache.org/jira/browse/FLINK-19125)).
-
-You can switch back to `glibc` as the memory allocator to restore the old behavior, or if you observe any unexpected memory consumption or other problems (please report the issue via JIRA or the mailing list if you do), by passing the `disable-jemalloc` parameter:
-
-```sh
-    docker run <jobmanager|standalone-job|taskmanager> disable-jemalloc
-```
-
-### Advanced customization
-
-There are several ways in which you can further customize the Flink image:
-
-* install custom software (e.g. python)
-* enable (symlink) optional libraries or plugins from `/opt/flink/opt` into `/opt/flink/lib` or `/opt/flink/plugins`
-* add other libraries to `/opt/flink/lib` (e.g. Hadoop)
-* add other plugins to `/opt/flink/plugins`
-
-See also: [How to provide dependencies in the classpath]({% link index.md %}#how-to-provide-dependencies-in-the-classpath).
-
-You can customize the Flink image in several ways:
-
-* **override the container entry point** with a custom script where you can run any bootstrap actions.
-At the end you can call the standard `/docker-entrypoint.sh` script of the Flink image with the same arguments
-as described in [how to run the Flink image](#how-to-run-flink-image).
-
-  The following example creates a custom entry point script which enables more libraries and plugins.
-  The custom script, custom library and plugin are provided from a mounted volume.
-  Then it runs the standard entry point script of the Flink image:
-
-    ```sh
-    # create custom_lib.jar
-    # create custom_plugin.jar
-
-    echo "
-    ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/.  # enable an optional library
-    ln -fs /mnt/custom_lib.jar /opt/flink/lib/.  # enable a custom library
-
-    mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
-    ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.  # enable an optional plugin
-
-    mkdir -p /opt/flink/plugins/custom_plugin
-    ln -fs /mnt/custom_plugin.jar /opt/flink/plugins/custom_plugin/.  # enable a custom plugin
-
-    /docker-entrypoint.sh <jobmanager|standalone-job|taskmanager>
-    " > custom_entry_point_script.sh
-
-    chmod 755 custom_entry_point_script.sh
-
-    docker run \
-        --mount type=bind,src=$(pwd),target=/mnt \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} /mnt/custom_entry_point_script.sh
-    ```
-
-* **extend the Flink image** by writing a custom `Dockerfile` and building a custom image:
 
-    *Dockerfile*:
+### Session Mode on Docker
 
-    ```dockerfile
-    FROM flink
+Local deployment in session mode has already been described in the [introduction](#starting-a-session-cluster-on-docker) above.
 
-    RUN set -ex; apt-get update; apt-get -y install python
 
-    ADD /host/path/to/flink-conf.yaml /container/local/path/to/custom/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /container/local/path/to/custom/conf/log4j.properties
-
-    RUN ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/.
-
-    RUN mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
-    RUN ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.
-
-    ENV VAR_NAME value
-    ```
-
-    **Commands for building**:
-
-    ```sh
-    docker build -t custom_flink_image .
-    # optionally push to your Docker image registry if you have one,
-    # e.g. to distribute the custom image to your cluster
-    docker push custom_flink_image
-    ```
-  
-### Enabling Python
-
-To build a custom image which has Python and PyFlink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
-
-# install python3 and pip3
-RUN apt-get update -y && \
-apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
-
-Build the image named **pyflink:latest**:
-
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
-
-{% top %}
-
-## Flink with Docker Compose
+### Flink with Docker Compose
 
 [Docker Compose](https://docs.docker.com/compose/) is a way to run a group of Docker containers locally.
-The next chapters show examples of configuration files to run Flink.
+The next sections show examples of configuration files to run Flink.
 
-### Usage
+#### Usage
 
 * Create the `yaml` files with the container configuration; check the examples for:
-    * [Session cluster](#session-cluster-with-docker-compose)
-    * [Job cluster](#job-cluster-with-docker-compose)
+  * [Session cluster](#session-cluster-with-docker-compose)
+  * [Application cluster](#application-cluster-with-docker-compose)
 
-    See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization)
-    for usage in the configuration files.
+  See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization)
+  for usage in the configuration files.
 
 * Launch a cluster in the foreground

Review comment:
       I will shorten it a tiny bit




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

