Samrat002 opened a new pull request, #21770:
URL: https://github.com/apache/flink/pull/21770

   
   
   ## What is the purpose of the change
   
   Currently, a Python Worker obtains its PyFlink dependencies in the following ways:
   
   1. The Worker Node's system Python path (e.g. `/usr/local/lib64/python3.7/site-packages`).
   2. The client ships Python dependencies via `-pyfs`/`--pyFiles` and `-pyarch`/`--pyArchives`, which are localized onto the `PYTHONPATH` of the Python Worker.
   3. The client ships a requirements file via `-pyreq`/`--pyRequirements`, whose contents are installed on the Worker Node and added to the `PYTHONPATH` of the Python Worker.
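   As a side note, options 2 and 3 both work through the standard `PYTHONPATH` mechanism: directories listed in that environment variable are prepended to `sys.path` of a newly started interpreter. A minimal sketch (not Flink code; the directory below is purely illustrative):

   ```python
   import os
   import subprocess
   import sys

   # Illustrative directory only -- taken from the example path in this
   # description, not from the actual PR diff.
   custom_site = "/usr/lib/pyflink/lib/python3.7/site-packages"

   # Start a fresh interpreter with PYTHONPATH pointing at the custom
   # directory and print its sys.path, one entry per line.
   env = dict(os.environ)
   env["PYTHONPATH"] = custom_site

   result = subprocess.run(
       [sys.executable, "-c", "import sys; print('\\n'.join(sys.path))"],
       env=env,
       capture_output=True,
       text=True,
       check=True,
   )

   # The custom directory appears verbatim in the child's sys.path, ahead
   # of the system site-packages, so modules installed there win lookups.
   print(custom_site in result.stdout.splitlines())
   ```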
   
   This change makes the `PYTHONPATH` of the Python Worker configurable. An admin/service provider can install the required PyFlink dependencies into a custom path (e.g. `/usr/lib/pyflink/lib/python3.7/site-packages`) on all Worker Nodes and then set that path in the client-side `flink-conf.yaml`. This works without any configuration from application users and without affecting other components that depend on the system Python path.
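   For illustration, such a setting in `flink-conf.yaml` might look as follows. The option key shown here is an assumption for the sketch, not necessarily the key this PR introduces; see the diff for the actual name.

   ```yaml
   # flink-conf.yaml -- illustrative only; the exact option key is an
   # assumption made for this sketch.
   python.pythonpath: /usr/lib/pyflink/lib/python3.7/site-packages
   ```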
   
   
   ## Brief change log
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
     - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
     - *Extended integration test for recovery after master (JobManager) 
failure*
     - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
     - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / no)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
     - The serializers: (yes / no / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
     - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / no)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
