azagrebin opened a new pull request #6957: [FLINK-10627][E2E tests] Test s3 output for streaming file sink
URL: https://github.com/apache/flink/pull/6957
 
 
   ## What is the purpose of the change
   
This PR extends the streaming file sink end-to-end test with an S3 output case.
   
   ## Brief change log
   
  - Add an end-to-end test Java utility module with an S3 console tool.
  - Add common-s3.sh with S3 bash functions and move the existing S3 functions there from common.sh (a rough sketch of such helpers follows this list).
  - Modify the S3 shaded dependency tests to use common-s3.sh.
  - Detect the end of the test by counting the lines in the output files.
  - Use an OUT_TYPE variable to switch the streaming file sink end-to-end test between local and S3 output (a sketch of the switch and the line-count check also follows this list).
  - Add the S3 case to the nightly tests.
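
   For illustration, the helpers collected in common-s3.sh could look roughly like the sketch below. This is only a sketch under the assumption that the AWS CLI is available on the test machine; the function names (s3_put, s3_get, s3_delete_by_prefix) are hypothetical and not taken from the actual script.

   ```sh
   #!/usr/bin/env bash
   # Hypothetical shape of the S3 helpers in common-s3.sh; the function names
   # and the use of the AWS CLI are assumptions made for illustration only.

   # Credentials come from the ARTIFACTS_AWS_* variables that are also used
   # in the "Verifying this change" section below.
   export AWS_ACCESS_KEY_ID="$ARTIFACTS_AWS_ACCESS_KEY"
   export AWS_SECRET_ACCESS_KEY="$ARTIFACTS_AWS_SECRET_KEY"

   # Upload a local file to the test bucket.
   function s3_put {
     local local_file="$1" bucket="$2" key="$3"
     aws s3 cp "$local_file" "s3://$bucket/$key"
   }

   # Download an object from the test bucket into a local file.
   function s3_get {
     local bucket="$1" key="$2" local_file="$3"
     aws s3 cp "s3://$bucket/$key" "$local_file"
   }

   # Remove all objects under a prefix, e.g. a test's output directory.
   function s3_delete_by_prefix {
     local bucket="$1" prefix="$2"
     aws s3 rm "s3://$bucket/$prefix" --recursive
   }
   ```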
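
   Similarly, a minimal sketch of the OUT_TYPE switch and the line-count based end-of-test detection is shown below. Apart from OUT_TYPE and ARTIFACTS_AWS_BUCKET, which appear in this PR description, all names (OUTPUT_PATH, TEST_DATA_DIR, wait_for_complete_output, the part-* file pattern) are illustrative assumptions.

   ```sh
   #!/usr/bin/env bash
   # Illustrative sketch only: apart from OUT_TYPE and ARTIFACTS_AWS_BUCKET,
   # the variable and function names here are hypothetical.

   OUT_TYPE="${1:-local}"

   case "$OUT_TYPE" in
     local)
       OUTPUT_PATH="$TEST_DATA_DIR/out"
       ;;
     s3)
       OUTPUT_PATH="s3://$ARTIFACTS_AWS_BUCKET/temp/test_streaming_file_sink-$(date +%s)"
       ;;
     *)
       echo "Unknown OUT_TYPE '$OUT_TYPE', expected 'local' or 's3'." >&2
       exit 1
       ;;
   esac

   # The test is considered finished once the total number of lines across all
   # produced part files reaches the expected record count. For the s3 case the
   # part files would first be fetched locally with helpers like the ones above.
   function wait_for_complete_output {
     local expected_lines="$1"
     local dir="$2"
     local actual_lines=0
     while [ "$actual_lines" -lt "$expected_lines" ]; do
       sleep 5
       actual_lines=$(find "$dir" -type f -name 'part-*' -exec cat {} + | wc -l)
     done
   }
   ```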
   
   ## Verifying this change
   
   Build Flink and then run:

   ```sh
   cd flink-end-to-end-tests
   FLINK_DIR=../build-target \
   ARTIFACTS_AWS_BUCKET=<test-bucket> \
   ARTIFACTS_AWS_ACCESS_KEY=<aws key> \
   ARTIFACTS_AWS_SECRET_KEY=<aws secret> \
   ./run-single-test.sh test-scripts/test_streaming_file_sink.sh <local|s3>
   ```
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
     - The serializers: (no)
     - The runtime per-record code paths (performance sensitive): (no)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
     - The S3 file system connector: (no)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (no)
     - If yes, how is the feature documented? (not applicable)
   
