imaffe opened a new pull request, #20886:
URL: https://github.com/apache/flink/pull/20886

   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
     - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
     - *Deployments RPC transmits only the blob storage reference*
     - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follow the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
     - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
     - *Extended integration test for recovery after master (JobManager) 
failure*
     - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
     - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / no)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
     - The serializers: (yes / no / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
     - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   Yes, this PR involves documentation.
   
   ## Some notes
   
   Here are some thoughts about this PR. Any suggestions are greatly welcome, and it would be nice to have your thoughts~ 
   
   - Currently this PR does not include the SQL Connector documentation; we want to create a separate PR for it. WDYT?
   - The e2e test (SQL client e2e test) is not present in this PR either. This is because we didn't have the SQL Client e2e test in our fork, and since it's new code, we want to create a separate PR for the e2e test.
   - The commit messages do not reference a JIRA ticket. This is because I want to keep each commit as independent as possible during the review process. Ideally we will rebase these commits and add the JIRA number later.
   
   There are 16 commits numbered from a to p (I don't know why I didn't use numbers XD, a rush of blood to the head probably). I tried to make the commits as independent as possible.
   
   - a to c: SQL Connector code
   - d to g: Testing code
   - h: packaging and manifest changes. I am not sure if this part is correct, because we did some hacks on the pom when we were maintaining our own fork. Your input is important here~ 
   - i: doc and doc generation configs
   - j to o: supporting the SQL Connector involves some changes in the DataStream connector code base as well. I prefixed such commits with "DataStream".
   
   The other commits are checkstyle fixes or later changes. For follow-up fixing commits (after review), I'll use a number plus the letter to indicate which commit the fix points to.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
