[
https://issues.apache.org/jira/browse/FLINK-10736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16686204#comment-16686204
]
ASF GitHub Bot commented on FLINK-10736:
----------------------------------------
dawidwys commented on a change in pull request #7077: [FLINK-10736][E2E tests] Use already uploaded to s3 file in shaded s3 e2e tests
URL: https://github.com/apache/flink/pull/7077#discussion_r233352507
##########
File path: flink-end-to-end-tests/test-scripts/test_shaded_hadoop_s3a.sh
##########
@@ -22,15 +22,11 @@
source "$(dirname "$0")"/common.sh
source "$(dirname "$0")"/common_s3.sh
-s3_put $TEST_INFRA_DIR/test-data/words $ARTIFACTS_AWS_BUCKET flink-end-to-end-test-shaded-s3a
-# make sure we delete the file at the end
-function shaded_s3a_cleanup {
- s3_delete $ARTIFACTS_AWS_BUCKET flink-end-to-end-test-shaded-s3a
-}
-trap shaded_s3a_cleanup EXIT
-
start_cluster
+bucket=flink-e2e-tests
Review comment:
Could we move the definition of `bucket` to `common_s3.sh`? Also, I think we
should use uppercase here to follow the naming convention of constants such as
`TEST_INFRA_DIR` and `ARTIFACTS_AWS_BUCKET`.
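A sketch of what the reviewer's suggestion could look like in `common_s3.sh`; the constant name `E2E_TEST_BUCKET` is a hypothetical choice, and only the value `flink-e2e-tests` comes from the diff above:

```shell
# Sketch only: E2E_TEST_BUCKET is a hypothetical name following the
# uppercase convention of constants like TEST_INFRA_DIR and
# ARTIFACTS_AWS_BUCKET; readonly guards it against reassignment.
readonly E2E_TEST_BUCKET=flink-e2e-tests
```

Test scripts sourcing `common_s3.sh` would then refer to `$E2E_TEST_BUCKET` instead of defining a local lowercase `bucket` variable.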
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Shaded Hadoop S3A end-to-end test failed on Travis
> --------------------------------------------------
>
> Key: FLINK-10736
> URL: https://issues.apache.org/jira/browse/FLINK-10736
> Project: Flink
> Issue Type: Bug
> Components: E2E Tests
> Affects Versions: 1.7.0
> Reporter: Till Rohrmann
> Assignee: Andrey Zagrebin
> Priority: Critical
> Labels: pull-request-available, test-stability
> Fix For: 1.7.0
>
>
> The {{Shaded Hadoop S3A end-to-end test}} failed on Travis because it could
> not find a file stored on S3:
> {code}
> org.apache.flink.client.program.ProgramInvocationException: Job failed. (JobID: f28270bedd943ed6b41548b60f5cea73)
> 	at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:268)
> 	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:487)
> 	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:475)
> 	at org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:62)
> 	at org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:85)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
> 	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
> 	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
> 	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
> 	at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
> 	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
> 	at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
> 	at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
> 	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
> 	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 	at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
> 	at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:265)
> 	... 21 more
> Caused by: java.io.IOException: Error opening the Input Split s3://[secure]/flink-end-to-end-test-shaded-s3a [0,44]: No such file or directory: s3://[secure]/flink-end-to-end-test-shaded-s3a
> 	at org.apache.flink.api.common.io.FileInputFormat.open(FileInputFormat.java:824)
> 	at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:470)
> 	at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:47)
> 	at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:170)
> 	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: No such file or directory: s3://[secure]/flink-end-to-end-test-shaded-s3a
> 	at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:2255)
> 	at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2149)
> 	at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2088)
> 	at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:699)
> 	at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.FileSystem.open(FileSystem.java:950)
> 	at org.apache.flink.fs.s3.common.hadoop.HadoopFileSystem.open(HadoopFileSystem.java:120)
> 	at org.apache.flink.fs.s3.common.hadoop.HadoopFileSystem.open(HadoopFileSystem.java:37)
> 	at org.apache.flink.api.common.io.FileInputFormat$InputSplitOpenThread.run(FileInputFormat.java:996)
> {code}
> https://api.travis-ci.org/v3/job/448770093/log.txt
> A solution could be to harden this test case.
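One way to harden the test against a transient S3 lookup failure would be to retry before giving up. The helper below is an illustrative sketch only, not code from the Flink test scripts (the name `retry_times` is a hypothetical choice):

```shell
# Illustrative hardening sketch: run a command up to $1 times with a
# delay of $2 seconds between attempts; return 0 on the first success,
# 1 if every attempt fails.
function retry_times {
    local retries=$1
    local delay=$2
    shift 2
    local i
    for (( i = 0; i < retries; i++ )); do
        if "$@"; then
            return 0
        fi
        sleep "${delay}"
    done
    return 1
}
```

The test script could then wrap the S3 read, or a preceding existence check on the object, in something like `retry_times 5 2 ...` so that a momentary lookup failure does not fail the whole run.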
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)