mateczagany opened a new pull request, #21788:
URL: https://github.com/apache/flink/pull/21788

     ## What is the purpose of the change
   
   Starting with Hadoop 3.3.2, copying local files to a remote filesystem via `S3AFileSystem` has been reworked: the local Hadoop `Path` passed in must now have a scheme specified.
   
   
   ## Brief change log
   
   `YarnApplicationFileUploader` now prepends the `file://` scheme to local paths when no scheme is specified.
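
   A minimal sketch of the idea (not the actual Flink code; `ensureFileScheme` and the class name are hypothetical): if a local path string carries no URI scheme, prepend `file://` so that newer Hadoop versions accept it as a local `Path`.

   ```java
   import java.net.URI;

   public class LocalPathScheme {
       // Hypothetical helper illustrating the fix: a scheme-less local path
       // gets the "file://" scheme prepended; paths that already carry a
       // scheme (file://, hdfs://, s3a://, ...) are returned unchanged.
       static String ensureFileScheme(String localPath) {
           URI uri = URI.create(localPath);
           if (uri.getScheme() == null) {
               return "file://" + localPath;
           }
           return localPath;
       }

       public static void main(String[] args) {
           System.out.println(ensureFileScheme("/tmp/flink-dist.jar"));
           System.out.println(ensureFileScheme("hdfs:///tmp/flink-dist.jar"));
       }
   }
   ```

   In the actual change this would be applied to the Hadoop `Path` objects handled by `YarnApplicationFileUploader` before they are passed to the target filesystem.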
   
   
   ## Verifying this change
   
   I manually tested this on a local single-node Hadoop 3.3.4 cluster with `fs.defaultFS` pointing to a local MinIO S3 server. Creating a Flink YARN session failed before the fix and succeeded after it.
   
   A new unit test was also added in `YarnApplicationFileUploaderTest`.
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / no) no
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) no
     - The serializers: (yes / no / don't know) no
     - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
yes
     - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / no) no
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented) not applicable
   

