[ https://issues.apache.org/jira/browse/FLINK-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16215460#comment-16215460 ]

ASF GitHub Bot commented on FLINK-7905:
---------------------------------------

GitHub user StephanEwen opened a pull request:

    https://github.com/apache/flink/pull/4892

     [FLINK-7905] [build] Update encrypted Travis S3 access keys

    This fixes the currently failing S3 file system tests by adding properly 
encrypted access credentials.
    
    The previous credentials had been deactivated.
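
    For context, Travis CI stores such secrets as encrypted environment
    variables committed to `.travis.yml`. A minimal sketch of rotating them
    with the Travis CLI, assuming hypothetical variable names (the actual
    names used by Flink's build are not shown in this thread):

    ```shell
    # Encrypt the new S3 credentials against the repository's public key
    # and append them to .travis.yml under env.global.
    # IT_CASE_S3_ACCESS_KEY / IT_CASE_S3_SECRET_KEY are placeholder names;
    # the real variable names in the Flink build may differ.
    travis encrypt IT_CASE_S3_ACCESS_KEY=AKIA... --add env.global
    travis encrypt IT_CASE_S3_SECRET_KEY=... --add env.global
    ```

    Because the values are encrypted per-repository, keys rotated this way
    must be re-encrypted for each repo (or fork) whose builds need them.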

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/StephanEwen/incubator-flink hotfixes

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/4892.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4892
    
----
commit e78c345baad8ceace64815a46e2d9f918db12cc0
Author: Stephan Ewen <se...@apache.org>
Date:   2017-10-15T18:54:01Z

    [hotfix] [tests] Increase stability of SavepointITCase

commit 5aafc1c11e337a7523852f046dbbda0804d599c2
Author: Stephan Ewen <se...@apache.org>
Date:   2017-10-23T16:50:03Z

    [FLINK-7905] [build] Update encrypted Travis S3 access keys

----


> HadoopS3FileSystemITCase failed on travis
> -----------------------------------------
>
>                 Key: FLINK-7905
>                 URL: https://issues.apache.org/jira/browse/FLINK-7905
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, Tests
>    Affects Versions: 1.4.0
>         Environment: https://travis-ci.org/zentol/flink/jobs/291550295
> https://travis-ci.org/tillrohrmann/flink/jobs/291491026
>            Reporter: Chesnay Schepler
>            Assignee: Stephan Ewen
>              Labels: test-stability
>
> The {{HadoopS3FileSystemITCase}} is flaky on Travis because its S3 access 
> requests are denied (HTTP 403 Forbidden).
> {code}
> -------------------------------------------------------
>  T E S T S
> -------------------------------------------------------
> Running org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> Tests run: 3, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 3.354 sec <<< 
> FAILURE! - in org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> testDirectoryListing(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)  
> Time elapsed: 0.208 sec  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/testdir: 
> getFileStatus on 
> s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/testdir: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> 9094999D7456C589), S3 Extended Request ID: 
> fVIcROQh4E1/GjWYYV6dFp851rjiKtFgNSCO8KkoTmxWbuxz67aDGqRiA/a09q7KS6Mz1Tnyab4=
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>       at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
>       at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
>       at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
>       at 
> com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1256)
>       at 
> com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1232)
>       at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:904)
>       at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1553)
>       at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:117)
>       at 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.getFileStatus(HadoopFileSystem.java:77)
>       at org.apache.flink.core.fs.FileSystem.exists(FileSystem.java:509)
>       at 
> org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase.testDirectoryListing(HadoopS3FileSystemITCase.java:163)
> testSimpleFileWriteAndRead(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)
>   Time elapsed: 0.275 sec  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/test.txt: 
> getFileStatus on 
> s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/test.txt: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> B3D8126BE6CF169F), S3 Extended Request ID: 
> T34sn+a/CcCFv+kFR/UbfozAkXXtiLDu2N31Ok5EydgKeJF5I2qXRCC/MkxSi4ymiiVWeSyb8FY=
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>       at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>       at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
>       at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
>       at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
>       at 
> com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1256)
>       at 
> com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1232)
>       at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:904)
>       at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1553)
>       at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:1234)
>       at 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.delete(HadoopFileSystem.java:134)
>       at 
> org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase.testSimpleFileWriteAndRead(HadoopS3FileSystemITCase.java:147)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)