1u0 commented on a change in pull request #9169: [FLINK-12998][docs] Update documentation for file systems loading as plugins
URL: https://github.com/apache/flink/pull/9169#discussion_r307727897
 
 

 ##########
 File path: docs/ops/filesystems/index.md
 ##########
 @@ -35,32 +35,52 @@ File system instances are instantiated once per process and then cached/pooled,
 * This will be replaced by the TOC
 {:toc}
 
-## Built-in File Systems
+## Local File System
 
-Flink ships with implementations for the following file systems:
+Flink has built-in support for the file system of the local machine, including any NFS or SAN drives mounted into that local file system.
+It can be used by default without additional configuration. Local files are referenced with the *file://* URI scheme.
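
As an aside (not part of the PR diff), a *file://* path can appear anywhere Flink accepts a URI. A minimal sketch, using Flink's documented `state.checkpoints.dir` option with an example path; the `conf` directory is mocked so the snippet runs stand-alone:

```sh
# Sketch: reference the local file system via the file:// URI scheme,
# here for Flink's checkpoint directory (example path).
mkdir -p ./conf   # mock conf directory for the demo
echo "state.checkpoints.dir: file:///tmp/flink-checkpoints" >> ./conf/flink-conf.yaml
grep "file://" ./conf/flink-conf.yaml
```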
 
-  - **local**: This file system is used when the scheme is *"file://"*, and it represents the file system of the local machine, including any NFS or SAN drives mounted into that local file system.
+## Pluggable File Systems
 
-  - **S3**: Flink directly provides file systems to talk to Amazon S3 with two alternative implementations, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`. Both implementations are self-contained with no dependency footprint.
-    
-  - **MapR FS**: The MapR file system *"maprfs://"* is automatically available when the MapR libraries are in the classpath.
-  
-  - **OpenStack Swift FS**: Flink directly provides a file system to talk to the OpenStack Swift file system, registered under the scheme *"swift://"*.
-  The implementation of `flink-swift-fs-hadoop` is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
-  To use it when using Flink as a library, add the respective maven dependency (`org.apache.flink:flink-swift-fs-hadoop:{{ site.version }}`
-  When starting a Flink application from the Flink binaries, copy or move the respective jar file from the `opt` folder to the `lib` folder.
+The Apache Flink project supports the following file systems as plugins:
+
+  - [**Amazon S3**](./s3.html) object storage is supported by two alternative implementations: `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
+  Both implementations are self-contained with no dependency footprint.
+
+  - The **MapR FS** file system adapter is already supported in the main Flink distribution under the *maprfs://* URI scheme.
+  You must provide the MapR libraries in the classpath (for example, in the `lib` directory).
+
+  - **OpenStack Swift FS** is supported by `flink-swift-fs-hadoop` and registered under the *swift://* URI scheme.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+  To use it when running Flink as a library, add the respective Maven dependency (`org.apache.flink:flink-swift-fs-hadoop:{{ site.version }}`).
   
-  - **Azure Blob Storage**: 
-    Flink directly provides a file system to work with Azure Blob Storage. 
-    This filesystem is registered under the scheme *"wasb(s)://"*.
-    The implementation is self-contained with no dependency footprint.
+  - **[Aliyun Object Storage Service](./oss.html)** is supported by `flink-oss-fs-hadoop` and registered under the *oss://* URI scheme.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+
+  - **[Azure Blob Storage](./azure.html)** is supported by `flink-azure-fs-hadoop` and registered under the *wasb(s)://* URI schemes.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+
+To use a pluggable file system, copy the corresponding JAR file from the `opt` directory to a directory under the `plugins` directory
+of your Flink distribution before starting Flink, e.g.
 
-## HDFS and Hadoop File System support 
+{% highlight bash %}
+mkdir ./plugins/s3-fs-hadoop
+cp ./opt/flink-s3-fs-hadoop-{{ site.version }}.jar ./plugins/s3-fs-hadoop/
+{% endhighlight %}
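
The same layout generalizes to several plugins. A sketch (outside the PR diff): the plugin names below are the ones listed on this page, the version is an example, and the `opt/` JARs are mocked with `touch` so the snippet runs stand-alone:

```sh
# Stage one sub-directory per file system plugin under plugins/ and copy
# the matching JAR from opt/ (JARs mocked here for demonstration).
set -e
FLINK_VERSION="1.9.0"                               # example version
mkdir -p opt plugins
for plugin in s3-fs-hadoop oss-fs-hadoop azure-fs-hadoop; do
  touch "opt/flink-${plugin}-${FLINK_VERSION}.jar"  # mock JAR
  mkdir -p "plugins/${plugin}"
  cp "opt/flink-${plugin}-${FLINK_VERSION}.jar" "plugins/${plugin}/"
done
ls plugins
```

Each sub-directory of `plugins` gets its own class loader, which is why every plugin JAR goes into its own directory rather than all into one.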
+
+<span class="label label-danger">Attention</span> The plugin mechanism for file systems was introduced in Flink version `1.9` to
+support dedicated Java class loaders per plugin and to move away from the class shading mechanism.
+You can still use the provided file systems (or your own implementations) via the old mechanism by copying the corresponding
+JAR file into the `lib` directory.
+
+It's encouraged to use the plugin-based loading mechanism for file systems that support it. Loading file system components from the `lib`
+directory may be not supported in the future.
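
For comparison, the old mechanism mentioned above amounts to placing the JAR on the main class path. A sketch (outside the PR diff; the version is an example and the JAR is mocked so the snippet runs stand-alone):

```sh
# Legacy (pre-plugin) mechanism: copy the file system JAR into lib/,
# where it is picked up by the main class loader rather than a
# per-plugin class loader.
set -e
mkdir -p opt lib
touch opt/flink-s3-fs-hadoop-1.9.0.jar   # mock JAR for the demo
cp opt/flink-s3-fs-hadoop-1.9.0.jar lib/
ls lib
```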
 
 Review comment:
   I've applied the second part: `... supported in future Flink versions`.
   Using `may not` is not suitable here, as it means prohibiting something.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

Reply via email to