[
https://issues.apache.org/jira/browse/FLINK-16400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17050298#comment-17050298
]
Robert Metzger edited comment on FLINK-16400 at 3/3/20 3:12 PM:
----------------------------------------------------------------
The same error also occurs in the {{YarnFileStageTestS3ITCase}}:
{code:java}
17:16:23.508 [INFO] Running org.apache.flink.yarn.YarnFileStageTestS3ITCase
17:16:29.337 [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 5.826 s <<< FAILURE! - in org.apache.flink.yarn.YarnFileStageTestS3ITCase
17:16:29.337 [ERROR] testRecursiveUploadForYarnS3a(org.apache.flink.yarn.YarnFileStageTestS3ITCase) Time elapsed: 0.071 s <<< ERROR!
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3a'. The scheme is directly supported by Flink through the following plugin: flink-s3-fs-hadoop. Please ensure that each plugin resides within its own subfolder within the plugins directory. See https://ci.apache.org/projects/flink/flink-docs-stable/ops/plugins.html for more information. If you want to use a Hadoop file system for that scheme, please add the scheme to the configuration fs.allowed-fallback-filesystems. For a full list of supported file systems, please see https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
    at org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarn(YarnFileStageTestS3ITCase.java:157)
    at org.apache.flink.yarn.YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3a(YarnFileStageTestS3ITCase.java:197)
17:16:29.368 [INFO]
17:16:29.368 [INFO] Results:
{code}
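As an aside, the exception message names two remedies: the {{flink-s3-fs-hadoop}} plugin, or falling back to a Hadoop file system via {{fs.allowed-fallback-filesystems}}. A minimal sketch of the fallback route, assuming Hadoop's {{hadoop-aws}} module is on the classpath (the class name, {{main}} method and bucket below are purely illustrative, not part of the test):
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.core.fs.FileSystem;

import java.net.URI;

public class S3aFallbackSketch {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Let Flink fall back to Hadoop's S3A implementation for the s3a://
        // scheme instead of requiring the flink-s3-fs-hadoop plugin.
        conf.setString("fs.allowed-fallback-filesystems", "s3a");
        FileSystem.initialize(conf, null); // no plugin manager needed for the fallback path

        // Without either the plugin or the fallback, this lookup throws the
        // UnsupportedFileSystemSchemeException shown in the log above.
        FileSystem fs = FileSystem.get(URI.create("s3a://some-bucket/some-key"));
        System.out.println(fs.getClass().getName());
    }
}
{code}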
This failure occurred in this Travis run: https://travis-ci.org/apache/flink/jobs/657296271
Interestingly, the error does not surface in the corresponding run on AZP:
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5843&view=logs&j=c2f345e3-6738-50c0-333e-11265e9cd7e4&t=bfc49226-e770-5168-1d5a-8fe08e0d5386
There, the log shows:
{code}
2020-03-03T01:32:52.8933937Z [INFO] T E S T S
2020-03-03T01:32:52.8934558Z [INFO] -------------------------------------------------------
2020-03-03T01:32:53.1954466Z [INFO] Running org.apache.flink.yarn.YarnFileStageTestS3ITCase
2020-03-03T01:32:53.6854001Z [WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.488 s - in org.apache.flink.yarn.YarnFileStageTestS3ITCase
2020-03-03T01:32:54.0205161Z [INFO]
2020-03-03T01:32:54.0206010Z [INFO] Results:
{code}
... so it seems the test was skipped on AZP because the {{NativeS3FileSystem}} was not on the classpath.
[~chesnay] Do you have an idea why this is happening?
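For reference, a test can skip itself when the S3 classes are missing with a check along these lines; this is only a hypothetical sketch, not the actual {{YarnFileStageTestS3ITCase}} code:
{code:java}
import org.junit.Assume;
import org.junit.BeforeClass;

public class S3ClasspathSkipSketch {

    /** Returns true if Hadoop's NativeS3FileSystem can be loaded. */
    private static boolean nativeS3FileSystemOnClasspath() {
        try {
            Class.forName("org.apache.hadoop.fs.s3native.NativeS3FileSystem");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    @BeforeClass
    public static void checkS3Available() {
        // When the assumption fails, JUnit skips the tests in this class,
        // which would show up as the "Skipped: 1" line in the AZP log.
        Assume.assumeTrue("NativeS3FileSystem not on the classpath",
                nativeS3FileSystemOnClasspath());
    }
}
{code}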
The only difference I can see between these two runs is that on Travis, we are using {{PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Phive-1.2.1"}}, while on Azure, it is {{PROFILE="-Dinclude-hadoop -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Phive-1.2.1"}}, i.e. {{-Dinclude-hadoop}} is only set on AZP.
> HdfsKindTest.testS3Kind fails in Hadoop 2.4.1 nightly test
> ----------------------------------------------------------
>
> Key: FLINK-16400
> URL: https://issues.apache.org/jira/browse/FLINK-16400
> Project: Flink
> Issue Type: Bug
> Components: FileSystems, Tests
> Reporter: Robert Metzger
> Priority: Major
> Labels: test-stability
>
> Log:
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5843&view=logs&j=f8cdcc9b-111a-5332-0026-209cb3eb5d15&t=57d35dc9-027e-5d4a-fbeb-1c24315e6ffb]
> and: [https://travis-ci.org/apache/flink/jobs/657296261]
> {code:java}
> 15:57:21.539 [ERROR] Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.291 s <<< FAILURE! - in org.apache.flink.runtime.fs.hdfs.HdfsKindTest
> 15:57:21.552 [ERROR] testS3Kind(org.apache.flink.runtime.fs.hdfs.HdfsKindTest) Time elapsed: 0.032 s <<< ERROR!
> org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3'. The scheme is directly supported by Flink through the following plugins: flink-s3-fs-hadoop, flink-s3-fs-presto. Please ensure that each plugin resides within its own subfolder within the plugins directory. See https://ci.apache.org/projects/flink/flink-docs-stable/ops/plugins.html for more information. If you want to use a Hadoop file system for that scheme, please add the scheme to the configuration fs.allowed-fallback-filesystems. For a full list of supported file systems, please see https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/.
>     at org.apache.flink.runtime.fs.hdfs.HdfsKindTest.testS3Kind(HdfsKindTest.java:57)
> 15:57:21.574 [INFO] Running org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriterOldHadoopWithNoTruncateSupportTest
> {code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)