Hello Pedro! I am adding the jar under /opt/flink/plugins/s3-fs-hadoop inside a docker image. It's definitely not happening in the task manager, and I don't believe it's happening in the job manager either. The error is coming from the FlinkSessionJob Kubernetes Custom Resource Definition (CRD).
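For reference, here is a stripped-down sketch of the kind of FlinkSessionJob manifest involved (the metadata name and the s3 path below are placeholders, not the actual values from my manifests):

apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: example-session-job        # placeholder name
spec:
  deploymentName: flink-cluster    # the FlinkDeployment session cluster to submit to
  job:
    jarURI: s3://example-bucket/jobs/my-job.jar   # placeholder path; this is where the 's3' scheme must resolve
    parallelism: 1
    upgradeMode: stateless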
The job manager itself actually registers the jar based on what I can see from the logs, so I think the error is happening before the JM even tries to register the job. Typing this out just gave me a thought: is it possible that the Flink CRD API is using an image specified somewhere else?

Bryan Cantos (He / Him)
Senior Software Engineer
New York, NY (EST)
<http://luminatedata.com/>

On Mon, May 26, 2025, 3:54 PM Pedro Mázala <pedroh.maz...@gmail.com> wrote:

> Hello there Bryan!
>
> It looks like Flink cannot find the s3 scheme in your packages. How are
> you adding the jars? Is the error happening on the TM or on the JM?
>
>
> Att,
> Pedro Mázala
> Be awesome
>
>
> On Thu, 22 May 2025 at 19:45, Bryan Cantos <bcan...@luminatedata.com>
> wrote:
>
>> Hello,
>>
>> I have deployed the Flink Operator via Helm chart
>> (https://github.com/apache/flink-kubernetes-operator) in our Kubernetes
>> cluster. I have a use case where we want to run ephemeral jobs, so I
>> created a FlinkDeployment and am trying to submit a job via
>> FlinkSessionJob. I have sent example yaml files of each; they are very
>> basic. I am using my own custom image based on "flink:1.19-java17",
>> where I download the "flink-s3-fs-hadoop-1.19.2.jar" jar file into the
>> directory "/opt/flink/plugins/s3-fs-hadoop", and under jarURI I am
>> pointing to a jar file in our s3 bucket. When the FlinkDeployment pod
>> (under the name flink-cluster) starts, I see in the logs that the
>> plugins are enabled.
>>
>> With all of this context, when I submit the FlinkSessionJob with the
>> deploymentName "flink-cluster", I get this error message:
>>
>>> Could not find a file system implementation for scheme 's3'. The scheme
>>> is directly supported by Flink through the following plugin(s):
>>> flink-s3-fs-hadoop, flink-s3-fs-presto. Please ensure that each plugin
>>> resides within its own subfolder within the plugins directory. See
>>> https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/filesystems/plugins/
>>> for more information. If you want to use a Hadoop file system for that
>>> scheme, please add the scheme to the configuration
>>> fs.allowed-fallback-filesystems. For a full list of supported file
>>> systems, please see
>>> https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
>>
>> I attempted multiple different paths based on what I could find online;
>> I even removed the jar and set the environment variable
>> "ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.19.2.jar", but the result
>> is the same.
>>
>> I am at a loss at this point; can someone help me? I would also
>> appreciate an invite to the Slack community if possible, so we can
>> discuss this more readily.
>>
>> Thank you in advance
>>
>> --
>> Bryan Cantos (He / Him)
>> Senior Software Engineer
>> New York, NY (EST)
>>
>> <http://luminatedata.com/>
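And for completeness, a stripped-down sketch of the FlinkDeployment described in the original message (the image tag and resource values are placeholders; the image stands in for my custom image built on flink:1.19-java17 with the plugin jar baked in):

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-cluster
spec:
  image: example-registry/flink-s3:1.19-java17   # placeholder tag for the custom image
  flinkVersion: v1_19
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"   # placeholder resources
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1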