Re: Shipping Filesystem Plugins with YarnClusterDescriptor

2020-06-11 Thread Kostas Kloudas
Hi John, I think that using different plugins is not going to be an issue, assuming that the schemes of your filesystems do not collide. This is already the case for S3 within Flink, where we have two implementations, one based on Presto and one based on Hadoop. For the first you can use the scheme s3p
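
A rough illustration of how the scheme picks the plugin (bucket and paths below are placeholders; s3p is the Presto-backed S3 scheme mentioned above, s3a the Hadoop-backed one):

    import org.apache.flink.core.fs.FileSystem;
    import org.apache.flink.core.fs.Path;

    public class SchemeExample {
        public static void main(String[] args) throws Exception {
            // Inside a running Flink process the filesystem plugins have already
            // been registered; each URI scheme resolves to the factory that
            // declared it, so two plugins coexist as long as their schemes differ.
            FileSystem prestoS3 = new Path("s3p://my-bucket/data").getFileSystem();
            FileSystem hadoopS3 = new Path("s3a://my-bucket/data").getFileSystem();
            System.out.println(prestoS3.getClass() + " vs " + hadoopS3.getClass());
        }
    }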

Re: Shipping Filesystem Plugins with YarnClusterDescriptor

2020-06-11 Thread John Mathews
So I think that will work, but it has some limitations. Namely, when launching clusters through a service (which is our use case), multiple clients may want clusters with different plugins, or with different versions of a given plugin, but because the FlinkClusterDescriptor
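
A sketch of the kind of per-client isolation in question, assuming the client consults the FLINK_PLUGINS_DIR environment variable when deciding which local plugins directory to ship (all names and paths below are made up):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class PerClientPluginStaging {
        public static void main(String[] args) throws Exception {
            // Stage an isolated plugins directory per request, containing only
            // the plugin (and version) that this particular client asked for.
            Path staged = Files.createTempDirectory("flink-plugins-client-42");
            Files.createDirectories(staged.resolve("my-custom-fs"));
            Files.copy(
                Paths.get("/srv/plugin-repo/my-custom-fs-1.2.jar"),
                staged.resolve("my-custom-fs").resolve("my-custom-fs-1.2.jar"));
            // The process that creates the YarnClusterDescriptor would then need
            // to see FLINK_PLUGINS_DIR=<staged>, e.g. set via ProcessBuilder when
            // the service launches the deployment in a child process, because
            // environment variables cannot be changed inside a running JVM.
            System.out.println("Staged plugins at " + staged);
        }
    }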

Re: Shipping Filesystem Plugins with YarnClusterDescriptor

2020-06-10 Thread Yangze Guo
Hi John, AFAIK, Flink will automatically ship the "plugins/" directory of your Flink distribution to YARN [1]. So, you just need to make a directory in "plugins/" and put your custom jar into it. Did you run into any problem with this approach? [1]
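
For reference, the layout this describes would look roughly like the following inside the Flink distribution the client uses (directory and jar names are illustrative):

    flink-dist/
      bin/
      lib/
      plugins/
        my-custom-fs/
          my-custom-fs-1.0.jar
        s3-fs-presto/
          flink-s3-fs-presto-1.10.1.jar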

Shipping Filesystem Plugins with YarnClusterDescriptor

2020-06-10 Thread John Mathews
Hello, I have a custom filesystem that I am trying to migrate to the plugins model described here: https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/#adding-a-new-pluggable-file-system-implementation, but it is unclear to me how to dynamically get the plugins directory to be
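
For context, the migration in question boils down to packaging something like the factory below into its own jar, registering it in META-INF/services/org.apache.flink.core.fs.FileSystemFactory, and dropping the jar into a subdirectory of plugins/ (class and scheme names here are placeholders, and the exact factory interface may vary slightly across Flink versions):

    import java.io.IOException;
    import java.net.URI;
    import org.apache.flink.core.fs.FileSystem;
    import org.apache.flink.core.fs.FileSystemFactory;

    public class MyCustomFileSystemFactory implements FileSystemFactory {

        @Override
        public String getScheme() {
            // URIs such as "myfs://bucket/path" are routed to this factory.
            return "myfs";
        }

        @Override
        public FileSystem create(URI fsUri) throws IOException {
            // MyCustomFileSystem stands in for the existing custom FileSystem
            // implementation bundled in the same plugin jar.
            return new MyCustomFileSystem(fsUri);
        }
    }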