viirya opened a new pull request #30827:
URL: https://github.com/apache/spark/pull/30827
### What changes were proposed in this pull request?
This patch proposes to unload inactive state stores as soon as possible. The unloading happens when an executor loads an active state store provider: at that point the state store coordinator returns the list of providers this executor has loaded that are already loaded by other executors in the new batch. Each state store provider in that list is then unloaded.
### Why are the changes needed?
Per the discussion at #30770, it makes sense to unload inactive state stores as soon as possible. Currently a maintenance task runs periodically to unload inactive state stores, so there is a delay between a state store becoming inactive and it being unloaded.
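For reference, the interval of that maintenance task is controlled by an existing configuration (snippet assumes a `SparkSession` named `spark`):

```scala
// Existing behavior: a background maintenance task unloads inactive stores
// only every spark.sql.streaming.stateStore.maintenanceInterval (60s by
// default), so an inactive store can linger for up to that interval.
spark.conf.set("spark.sql.streaming.stateStore.maintenanceInterval", "60s")
```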
However, we can force Spark to always allocate a state store to the same executor by using the task locality configuration, which reduces the chance of having inactive state stores at all. With locality configured, we should normally not see inactive state stores. There is still a chance that an executor fails and is reallocated, but in that case the inactive state store is lost along with it, so it is not an issue.
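One possible way to configure this (my assumption, not mandated by this patch; snippet assumes a `SparkSession` named `spark`):

```scala
// Bias the scheduler toward the executor that already holds a store by
// increasing the locality wait, so process-local assignments are strongly
// preferred over falling back to another executor.
spark.conf.set("spark.locality.wait", "10s")
```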
Making driver-executor communication bi-directional just for unloading inactive state stores looks non-trivial, and it does not seem worth it given what we can already achieve with locality.
This PR proposes a simpler but effective approach: when reporting an active state store to the coordinator, check whether any store loaded on this executor is already loaded on another executor. If so, that store is now inactive and would otherwise only be unloaded by the next maintenance task, so we unload it immediately instead.
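A sketch of the coordinator-side bookkeeping this implies; the real `StateStoreCoordinator` API may differ, and all names here are illustrative:

```scala
import scala.collection.mutable

// Illustrative provider identifier, standing in for Spark's internal type.
case class StateStoreProviderId(operatorId: Long, partitionId: Int)

class CoordinatorSketch {
  // Which executor most recently reported each provider as active.
  private val instances = mutable.Map.empty[StateStoreProviderId, String]

  /** Record `id` as active on `executorId`, then return those of the
   *  executor's other loaded providers that are now active elsewhere. */
  def reportActiveInstance(
      id: StateStoreProviderId,
      executorId: String,
      loadedOnExecutor: Seq[StateStoreProviderId]): Seq[StateStoreProviderId] =
    synchronized {
      instances(id) = executorId
      loadedOnExecutor.filter(p => instances.get(p).exists(_ != executorId))
    }
}
```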
How do we make sure a state store loaded in the previous batch is loaded on another executor in this batch before this executor reports? With task locality and preferred locations, once an executor is ready to be scheduled, Spark should assign it the state store provider it previously loaded. So when this executor gets an assignment other than its previously loaded state store, it means the previously loaded one has already been assigned to another executor.
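For context, this relies on the existing preferred-location mechanism, roughly as follows (a simplified sketch inspired by `StateStoreRDD.getPreferredLocations`, with illustrative types):

```scala
// The coordinator remembers which executor last reported each provider, so
// a live executor keeps being offered its previously loaded stores.
case class StateStoreProviderId(operatorId: Long, partitionId: Int)

trait CoordinatorRef {
  // Returns the "host_executorId" location of the store, if known.
  def getLocation(id: StateStoreProviderId): Option[String]
}

object PreferredLocationSketch {
  def preferredLocations(
      coord: CoordinatorRef, operatorId: Long, partitionId: Int): Seq[String] =
    coord.getLocation(StateStoreProviderId(operatorId, partitionId)).toSeq
}
```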
There is still a delay between a state store being loaded on another executor and this executor unloading it when reporting its own active state store. But multiple state stores belonging to the same operator will never be loaded at the same time on a single executor, because once the executor reports any active store, it unloads all inactive ones. This should not be an issue IMHO.
This is a minimal change that unloads inactive state stores as soon as possible without restructuring the existing mechanism.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Unit test.