[ https://issues.apache.org/jira/browse/FLINK-32318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17732909#comment-17732909 ]

Luís Costa edited comment on FLINK-32318 at 6/15/23 8:06 AM:
-------------------------------------------------------------

Hi [~nateab]

Yes, I already did that, but the error remains.
{code:java}
kubectl exec -it flink-operator-8778bd969-2kxj5 -n flinkoperator -- /bin/bash

flink@flink-operator-8778bd969-2kxj5:/flink-kubernetes-operator$ ls -lrth /opt/flink/plugins/s3/
total 121M
-rw-r--r-- 1 flink flink 92M Jun 14 17:19 flink-s3-fs-presto-1.16.2.jar
-rw-r--r-- 1 flink flink 30M Jun 14 17:19 flink-s3-fs-hadoop-1.16.2.jar{code}
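
For context, the exception quoted below is thrown by the JobManager (KubernetesApplicationClusterEntrypoint), so the plugin layout also needs to be verified inside the job pod, not only in the operator pod. A minimal sketch of that check (pod and namespace names are placeholders, not taken from this ticket):
{code:java}
# Illustrative only: confirm the S3 filesystem plugin sits in its own
# subfolder under /opt/flink/plugins inside the JobManager pod of the failing job.
kubectl exec -it <jobmanager-pod> -n <namespace> -- ls -lrth /opt/flink/plugins/s3/
{code}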


> [flink-operator] missing s3 plugin in folder plugins
> ----------------------------------------------------
>
>                 Key: FLINK-32318
>                 URL: https://issues.apache.org/jira/browse/FLINK-32318
>             Project: Flink
>          Issue Type: Bug
>          Components: Kubernetes Operator
>    Affects Versions: kubernetes-operator-1.5.0
>            Reporter: Luís Costa
>            Priority: Minor
>
> Greetings,
> I'm trying to configure [Flink's Kubernetes HA 
> services|https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/deployment/ha/kubernetes_ha/]
> for Flink operator jobs, but I get an error about the s3 plugin: _"Could not 
> find a file system implementation for scheme 's3'. The scheme is directly 
> supported by Flink through the following plugin(s): flink-s3-fs-hadoop, 
> flink-s3-fs-presto"_
> {code:java}
> 2023-06-12 10:05:16,981 INFO  akka.remote.Remoting                            
>              [] - Starting remoting
> 2023-06-12 10:05:17,194 INFO  akka.remote.Remoting                            
>              [] - Remoting started; listening on addresses 
> :[akka.tcp://flink@10.4.125.209:6123]
> 2023-06-12 10:05:17,377 INFO  
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils        [] - Actor 
> system started at akka.tcp://flink@10.4.125.209:6123
> 2023-06-12 10:05:18,175 INFO  
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - Shutting 
> KubernetesApplicationClusterEntrypoint down with application status FAILED. 
> Diagnostics org.apache.flink.util.FlinkException: Could not create the ha 
> services from the instantiated HighAvailabilityServicesFactory 
> org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.
>       at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:299)
>       at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:285)
>       at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:145)
>       at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:439)
>       at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:382)
>       at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:282)
>       at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:232)
>       at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>       at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:229)
>       at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:729)
>       at 
> org.apache.flink.kubernetes.entrypoint.KubernetesApplicationClusterEntrypoint.main(KubernetesApplicationClusterEntrypoint.java:86)
> Caused by: java.io.IOException: Could not create FileSystem for highly 
> available storage path 
> (s3://td-infra-stg-us-east-1-s3-flinkoperator/flink-data/ha/flink-basic-example-xpto)
>       at 
> org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:102)
>       at 
> org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:86)
>       at 
> org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.createHAServices(KubernetesHaServicesFactory.java:41)
>       at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:296)
>       ... 10 more
> Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find 
> a file system implementation for scheme 's3'. The scheme is directly 
> supported by Flink through the following plugin(s): flink-s3-fs-hadoop, 
> flink-s3-fs-presto. Please ensure that each plugin resides within its own 
> subfolder within the plugins directory. See 
> https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/filesystems/plugins/
>  for more information. If you want to use a Hadoop file system for that 
> scheme, please add the scheme to the configuration 
> fs.allowed-fallback-filesystems. For a full list of supported file systems, 
> please see 
> https://nightlies.apache.org/flink/flink-docs-stable/ops/filesystems/.
>     at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:515)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:409) 
> ~[flink-dist-1.16.2.jar:1.16.2]
>     at org.apache.flink.core.fs.Path.getFileSystem(Path.java:274) 
> ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:99)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:86)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.createHAServices(KubernetesHaServicesFactory.java:41)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:296)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomHAServices(HighAvailabilityServicesUtils.java:285)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:145)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:439)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:382)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:282)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$1(ClusterEntrypoint.java:232)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
>  ~[flink-dist-1.16.2.jar:1.16.2]
>     at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:229)
>  ~[flink-dist-1.16.2.jar:1.16.2]{code}
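> For reference, the failure happens while the JobManager creates the HA blob 
> store for the configured storage path, so the S3 plugin must be loadable in the 
> job image at startup. A minimal sketch of the HA settings that exercise this 
> code path (the storageDir and factory class are taken from the log above; the 
> exact keys used in this deployment are an assumption):
> {code:java}
> # Sketch of the assumed flink-conf.yaml entries behind this stack trace:
> high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
> high-availability.storageDir: s3://td-infra-stg-us-east-1-s3-flinkoperator/flink-data/ha/flink-basic-example-xpto
> {code}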
> Looking into the job container, I can see that the s3 plugins are in the folder 
> _/opt/flink/opt_ instead of _/opt/flink/plugins/s3_, as mentioned 
> [here|https://nightlies.apache.org/flink/flink-docs-stable/docs/deployment/filesystems/plugins/]
> {code:java}
> root@flink-basic-example-xpto1-86bb9b9d44-hksq8:/opt/flink# cd plugins/
> root@flink-basic-example-xpto1-86bb9b9d44-hksq8:/opt/flink/plugins# ls -la
> total 4
> drwxr-xr-x 10 flink flink 210 May 18 06:07 .
> drwxr-xr-x  1 flink flink  37 Jun 11 20:17 ..
> drwxr-xr-x  2 flink flink 114 May 18 06:07 external-resource-gpu
> drwxr-xr-x  2 flink flink  46 May 18 06:07 metrics-datadog
> drwxr-xr-x  2 flink flink  47 May 18 06:07 metrics-graphite
> drwxr-xr-x  2 flink flink  47 May 18 06:07 metrics-influx
> drwxr-xr-x  2 flink flink  42 May 18 06:07 metrics-jmx
> drwxr-xr-x  2 flink flink  49 May 18 06:07 metrics-prometheus
> drwxr-xr-x  2 flink flink  44 May 18 06:07 metrics-slf4j
> drwxr-xr-x  2 flink flink  45 May 18 06:07 metrics-statsd
> -rwxr-xr-x  1 flink flink 654 May 17 09:19 README.txt
> root@flink-basic-example-xpto1-86bb9b9d44-hksq8:/opt/flink/plugins# cd ..
> root@flink-basic-example-xpto1-86bb9b9d44-hksq8:/opt/flink# cd opt/
> root@flink-basic-example-xpto1-86bb9b9d44-hksq8:/opt/flink/opt# ls -la | grep s3
> -rw-r--r-- 1 flink flink 30515842 May 18 06:00 flink-s3-fs-hadoop-1.16.2.jar
> -rw-r--r-- 1 flink flink 96171268 May 18 06:00 flink-s3-fs-presto-1.16.2.jar 
> {code}
> Also, looking into the flink-operator container, I did not find those s3 plugins.
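> A possible remediation, sketched below under the assumption that a custom image 
> based on the stock flink:1.16.2 layout shown above is acceptable, is to place 
> the bundled S3 filesystem jar into its own subfolder under /opt/flink/plugins 
> before the JobManager starts:
> {code:java}
> # Illustrative only (jar name and paths follow the listing above):
> mkdir -p /opt/flink/plugins/s3-fs-presto
> cp /opt/flink/opt/flink-s3-fs-presto-1.16.2.jar /opt/flink/plugins/s3-fs-presto/
> {code}
> If I'm not mistaken, the official Flink Docker images can do an equivalent step 
> at container startup when the ENABLE_BUILT_IN_PLUGINS environment variable lists 
> the desired plugin jars, which may be the easier route for operator-managed jobs.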
> Best regards,
> Luís Costa
>  


