[ 
https://issues.apache.org/jira/browse/FLINK-24057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Kim updated FLINK-24057:
------------------------------
    Description: 
I'm trying to use a CSV file on an S3-compliant object store and query it 
through the Flink SQL client.

As the docs 
([https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/filesystems/s3/#hadooppresto-s3-file-systems-plugins])
 state, I have added s3.access-key, s3.secret-key, s3.endpoint, and 
s3.path.style.access to flink-conf.yaml.
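
For reference, the relevant part of my flink-conf.yaml looks roughly like the 
following (the endpoint and credential values are placeholders, not my actual 
values):

    s3.access-key: <access-key>
    s3.secret-key: <secret-key>
    s3.endpoint: http://<object-store-endpoint>:9000
    s3.path.style.access: true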

 

However, when I run the Flink SQL client, create a table with the connector 
set to filesystem, the path set to an s3a path, and the format set to csv, and 
then run SELECT * on the table, it hangs for a couple of minutes. Checking the 
logs, I see "INFO org.apache.flink.core.fs.FileSystem [] - Hadoop is not in 
the classpath/dependencies. The extended set of supported File Systems via 
Hadoop is not available". According to the docs, adding the Hadoop S3 File 
System plugin should have taken care of this, but it does not work as expected.
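
For completeness, the DDL and query look roughly like this (the table name, 
schema, and bucket path below are illustrative placeholders, not the exact 
ones I used):

    CREATE TABLE my_csv_table (
      id INT,
      name STRING
    ) WITH (
      'connector' = 'filesystem',
      'path' = 's3a://<bucket>/path/to/file.csv',
      'format' = 'csv'
    );

    SELECT * FROM my_csv_table;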

  was:
I'm trying to use a CSV file on an S3-compliant object store and query it 
through the Flink SQL client.

As the docs 
(https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/filesystems/s3/#hadooppresto-s3-file-systems-plugins)
 state, I have added s3.access-key, s3.secret-key, s3.endpoint, and 
s3.path.style.access to flink-conf.yaml.

 

In the Flink SQL client, I created a table with the connector set to 
filesystem, the path set to an s3a path, and the format set to csv.

 

However, when I run SELECT * on the table, it hangs for a couple of minutes. 
Checking the logs, I see "INFO org.apache.flink.core.fs.FileSystem [] - Hadoop 
is not in the classpath/dependencies. The extended set of supported File 
Systems via Hadoop is not available". According to the docs, adding the Hadoop 
S3 File System plugin should have taken care of this, but it does not work as 
expected.


> Flink SQL client Hadoop is not in the classpath/dependencies error even though 
> Hadoop S3 File System plugin was added
> --------------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-24057
>                 URL: https://issues.apache.org/jira/browse/FLINK-24057
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / API, Table SQL / Client
>    Affects Versions: 1.13.2
>         Environment: VirtualBox Ubuntu 18.03 
>            Reporter: James Kim
>            Priority: Major
>
> I'm trying to use a CSV file on an S3-compliant object store and query it 
> through the Flink SQL client.
> As the docs 
> ([https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/filesystems/s3/#hadooppresto-s3-file-systems-plugins])
>  state, I have added s3.access-key, s3.secret-key, s3.endpoint, and 
> s3.path.style.access to flink-conf.yaml.
>  
> However, when I run the Flink SQL client, create a table with the connector 
> set to filesystem, the path set to an s3a path, and the format set to csv, 
> and then run SELECT * on the table, it hangs for a couple of minutes. 
> Checking the logs, I see "INFO org.apache.flink.core.fs.FileSystem [] - 
> Hadoop is not in the classpath/dependencies. The extended set of supported 
> File Systems via Hadoop is not available". According to the docs, adding the 
> Hadoop S3 File System plugin should have taken care of this, but it does not 
> work as expected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
