zentol commented on a change in pull request #12558:
URL: https://github.com/apache/flink/pull/12558#discussion_r438098945



##########
File path: docs/ops/deployment/docker.md
##########
@@ -264,10 +268,14 @@ There are several ways in which you can further customize 
the Flink image:
 
 * install custom software (e.g. python)
 * enable (symlink) optional libraries or plugins from `/opt/flink/opt` into 
`/opt/flink/lib` or `/opt/flink/plugins`
-* add other libraries to `/opt/flink/lib` (e.g. 
[hadoop](hadoop.html#adding-hadoop-to-lib))
+* add other libraries to `/opt/flink/lib` (e.g. [hadoop](#extend-with-maven))
 * add other plugins to `/opt/flink/plugins`
 
-you can achieve this in several ways:
+All files in the `/opt/flink/lib/` folder are added to the classpath used to 
start Flink. It is suitable for libraries such as Hadoop or file systems not 
available as plugins.

Review comment:
      This seems like information that is better served in some other place that explains the structure of flink-dist, which we could then just link to.

##########
File path: docs/ops/deployment/native_kubernetes.md
##########
@@ -116,10 +116,12 @@ $ kubectl port-forward service/<ServiceName> 8081
 - `NodePort`: Exposes the service on each Node’s IP at a static port (the 
`NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager 
Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
 You could find it in your kube config file.
 
-- `LoadBalancer`: Default value, exposes the service externally using a cloud 
provider’s load balancer.
+- `LoadBalancer`: **Default value**, exposes the service externally using a 
cloud provider’s load balancer.

Review comment:
       Link to 
`https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#kubernetes-rest-service-exposed-type`
 instead so we don't risk being out-of-sync with the actual defaults.
   
   Note also that the option key documented here seems to be incorrect; it 
should be
   `kubernetes.rest-service.exposed.type`.
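   For illustration, this is how the corrected option key could be pinned explicitly in `flink-conf.yaml` rather than relying on the documented default (a scratch directory stands in for the real conf dir; the valid values `ClusterIP`/`NodePort`/`LoadBalancer` are per the Flink configuration reference):

    ```sh
    # Hedged sketch: write the corrected option key into a flink-conf.yaml.
    # CONF_DIR is a scratch directory standing in for the real Flink conf dir.
    CONF_DIR="$(mktemp -d)"
    cat >> "$CONF_DIR/flink-conf.yaml" <<'EOF'
    kubernetes.rest-service.exposed.type: NodePort
    EOF
    cat "$CONF_DIR/flink-conf.yaml"
    ```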

##########
File path: docs/ops/deployment/docker.md
##########
@@ -319,13 +329,57 @@ as described in [how to run the Flink 
image](#how-to-run-flink-image).
     ENV VAR_NAME value
     ```
 
+    **Commands for building**:
+
     ```sh
     docker build -t custom_flink_image .
     # optional push to your docker image registry if you have it,
     # e.g. to distribute the custom image to your cluster
     docker push custom_flink_image
     ```
 
+    <a name="extend-with-maven"></a>
+    If you need to **extend the Flink image with a Maven dependency (and its 
transitive dependencies)**,
+you can use a Maven *pom.xml* file such as:
+
+   *pom.xml*:
+
+    ```xml
+    <?xml version="1.0" encoding="UTF-8"?>
+    <project xmlns="http://maven.apache.org/POM/4.0.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd">
+      <modelVersion>4.0.0</modelVersion>
+
+      <groupId>org.apache.flink</groupId>
+      <artifactId>docker-dependencies</artifactId>

Review comment:
       should we really be informing users on how to download stuff from maven?
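   For context on what the pom.xml approach does under the hood: Maven resolves coordinates against the standard repository layout (groupId dots become path segments). A minimal sketch of that mapping, with an illustrative coordinate (not a recommendation):

    ```sh
    # Illustrative only: map Maven coordinates to the standard repository path.
    # The coordinate below is a stand-in example.
    group="org.apache.flink"; artifact="flink-shaded-hadoop-2-uber"; version="2.8.3-10.0"
    path="$(echo "$group" | tr '.' '/')/$artifact/$version/$artifact-$version.jar"
    echo "https://repo.maven.apache.org/maven2/$path"
    ```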

##########
File path: docs/ops/deployment/docker.md
##########
@@ -264,10 +268,14 @@ There are several ways in which you can further customize 
the Flink image:
 
 * install custom software (e.g. python)
 * enable (symlink) optional libraries or plugins from `/opt/flink/opt` into 
`/opt/flink/lib` or `/opt/flink/plugins`
-* add other libraries to `/opt/flink/lib` (e.g. 
[hadoop](hadoop.html#adding-hadoop-to-lib))
+* add other libraries to `/opt/flink/lib` (e.g. [hadoop](#extend-with-maven))
 * add other plugins to `/opt/flink/plugins`
 
-you can achieve this in several ways:
+All files in the `/opt/flink/lib/` folder are added to the classpath used to 
start Flink. It is suitable for libraries such as Hadoop or file systems not 
available as plugins.
+
+Files in the `/opt/flink/plugins/` are loaded at runtime by Flink through 
separate classloaders to avoid conflicts with classes loaded and used by Flink. 
Only jar files which are prepared as [plugins]({{ site.baseurl 
}}/ops/plugins.html) can be added here.

Review comment:
       ```suggestion
   Files in the `/opt/flink/plugins/` directory are loaded at runtime by Flink 
through separate classloaders to avoid conflicts with classes loaded and used 
by Flink. Only jar files which are prepared as [plugins]({{ site.baseurl 
}}/ops/plugins.html) can be added here.
   ```

##########
File path: docs/ops/deployment/docker.md
##########
@@ -264,10 +268,14 @@ There are several ways in which you can further customize 
the Flink image:
 
 * install custom software (e.g. python)
 * enable (symlink) optional libraries or plugins from `/opt/flink/opt` into 
`/opt/flink/lib` or `/opt/flink/plugins`
-* add other libraries to `/opt/flink/lib` (e.g. 
[hadoop](hadoop.html#adding-hadoop-to-lib))
+* add other libraries to `/opt/flink/lib` (e.g. [hadoop](#extend-with-maven))
 * add other plugins to `/opt/flink/plugins`
 
-you can achieve this in several ways:
+All files in the `/opt/flink/lib/` folder are added to the classpath used to 
start Flink. It is suitable for libraries such as Hadoop or file systems not 
available as plugins.
+
+Files in the `/opt/flink/plugins/` are loaded at runtime by Flink through 
separate classloaders to avoid conflicts with classes loaded and used by Flink. 
Only jar files which are prepared as [plugins]({{ site.baseurl 
}}/ops/plugins.html) can be added here.

Review comment:
      This information either duplicates the plugins documentation or belongs in it.

##########
File path: docs/ops/deployment/docker.md
##########
@@ -264,10 +268,14 @@ There are several ways in which you can further customize 
the Flink image:
 
 * install custom software (e.g. python)
 * enable (symlink) optional libraries or plugins from `/opt/flink/opt` into 
`/opt/flink/lib` or `/opt/flink/plugins`
-* add other libraries to `/opt/flink/lib` (e.g. 
[hadoop](hadoop.html#adding-hadoop-to-lib))
+* add other libraries to `/opt/flink/lib` (e.g. [hadoop](#extend-with-maven))
 * add other plugins to `/opt/flink/plugins`
 
-you can achieve this in several ways:
+All files in the `/opt/flink/lib/` folder are added to the classpath used to 
start Flink. It is suitable for libraries such as Hadoop or file systems not 
available as plugins.
+
+Files in the `/opt/flink/plugins/` are loaded at runtime by Flink through 
separate classloaders to avoid conflicts with classes loaded and used by Flink. 
Only jar files which are prepared as [plugins]({{ site.baseurl 
}}/ops/plugins.html) can be added here.

Review comment:
      The phrasing is also misleading since it conveys the idea that jars are 
put directly into the `plugins` directory, which doesn't work (each plugin 
needs a dedicated sub-directory).
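   A sketch of the layout this comment describes, i.e. one dedicated sub-directory per plugin under `plugins/` (a scratch directory stands in for the real `/opt/flink` here, and the jar is an empty stand-in file):

    ```sh
    # Scratch directory standing in for the real /opt/flink of the image.
    FLINK_HOME="$(mktemp -d)"
    mkdir -p "$FLINK_HOME/opt"
    touch "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.11.0.jar"   # stand-in jar

    # Correct: the jar goes into its own sub-directory under plugins/,
    # not into plugins/ directly.
    mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop"
    cp "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.11.0.jar" "$FLINK_HOME/plugins/s3-fs-hadoop/"
    ls "$FLINK_HOME/plugins/s3-fs-hadoop"
    ```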




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
