gaogaotiantian opened a new pull request, #53783:
URL: https://github.com/apache/spark/pull/53783

   <!--
   Thanks for sending a pull request!  Here are some tips for you:
     1. If this is your first time, please read our contributor guidelines: 
https://spark.apache.org/contributing.html
     2. Ensure you have added or run the appropriate tests for your PR: 
https://spark.apache.org/developer-tools.html
     3. If the PR is unfinished, add '[WIP]' in your PR title, e.g., 
'[WIP][SPARK-XXXX] Your PR title ...'.
     4. Be sure to keep the PR description updated to reflect all changes.
     5. Please write your PR title to summarize what this PR proposes.
     6. If possible, provide a concise example to reproduce the issue for a 
faster review.
     7. If you want to add a new configuration, please read the guideline first 
for naming configurations in
        
'core/src/main/scala/org/apache/spark/internal/config/ConfigEntry.scala'.
     8. If you want to add or modify an error type or message, please read the 
guideline first in
        'common/utils/src/main/resources/error/README.md'.
   -->
   
   ### What changes were proposed in this pull request?
   <!--
   Please clarify what changes you are proposing. The purpose of this section 
is to outline the changes and how this PR fixes the issue. 
   If possible, please consider writing useful notes for better and faster 
reviews in your PR. See the examples below.
     1. If you refactor some codes with changing classes, showing the class 
hierarchy will help reviewers.
     2. If you fix some SQL features, you can provide some references of other 
DBMSes.
     3. If there is design documentation, please add the link.
     4. If there is a discussion in the mailing list, please add the link.
   -->
   
   Commented out the `del_remote_cache` call in `RemoteModelRef`. This is a temporary workaround for the deadlock in `test_distributed_lda`. It is acceptable, but there might be better options.
   
   ### Why are the changes needed?
   <!--
   Please clarify why the changes are needed. For instance,
     1. If you propose a new API, clarify the use case for a new API.
     2. If you fix a bug, you can clarify why it is a bug.
   -->
   
   We have a very flaky test, `test_distributed_lda`: [Failure 1](https://github.com/gaogaotiantian/spark/actions/runs/20940523104/job/60173040026), [Failure 2](https://github.com/apache/spark/actions/runs/20938424034/job/60166745846), [Failure 3](https://github.com/apache/spark/actions/runs/20908277105/job/60066145491). It is a deadlock, with the following traceback:
   
   ```
    (Python) File "/__w/spark/spark/python/pyspark/ml/tests/connect/test_parity_clustering.py", line 30, in <module>
        main()
    (Python) File "/__w/spark/spark/python/pyspark/testing/unittestutils.py", line 43, in main
        unittest.main(module=module, testRunner=testRunner, verbosity=2)
    (Python) File "/usr/lib/python3.11/unittest/main.py", line 102, in __init__
        self.runTests()
    (Python) File "/usr/lib/python3.11/unittest/main.py", line 274, in runTests
        self.result = testRunner.run(self.test)
    (Python) File "/usr/local/lib/python3.11/dist-packages/xmlrunner/runner.py", line 67, in run
        test(result)
    (Python) File "/usr/lib/python3.11/unittest/suite.py", line 84, in __call__
        return self.run(*args, **kwds)
    (Python) File "/usr/lib/python3.11/unittest/suite.py", line 122, in run
        test(result)
    (Python) File "/usr/lib/python3.11/unittest/suite.py", line 84, in __call__
        return self.run(*args, **kwds)
    (Python) File "/usr/lib/python3.11/unittest/suite.py", line 122, in run
        test(result)
    (Python) File "/usr/lib/python3.11/unittest/case.py", line 678, in __call__
        return self.run(*args, **kwds)
    (Python) File "/usr/lib/python3.11/unittest/case.py", line 623, in run
        self._callTestMethod(testMethod)
    (Python) File "/usr/lib/python3.11/unittest/case.py", line 579, in _callTestMethod
        if method() is not None:
    (Python) File "/__w/spark/spark/python/pyspark/ml/tests/test_clustering.py", line 466, in test_distributed_lda
        self.assertEqual(str(model), str(model2))
    (Python) File "/__w/spark/spark/python/pyspark/ml/wrapper.py", line 474, in __repr__
        return self._call_java("toString")
    (Python) File "/__w/spark/spark/python/pyspark/ml/util.py", line 322, in wrapped
        return remote_call()
    (Python) File "/__w/spark/spark/python/pyspark/ml/util.py", line 308, in remote_call
        (_, properties, _) = session.client.execute_command(command)
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/core.py", line 1162, in execute_command
        data, _, metrics, observed_metrics, properties = self._execute_and_fetch(
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/core.py", line 1664, in _execute_and_fetch
        for response in self._execute_and_fetch_as_iterator(
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/core.py", line 1621, in _execute_and_fetch_as_iterator
        generator = ExecutePlanResponseReattachableIterator(
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/reattach.py", line 127, in __init__
        self._stub.ExecutePlan(self._initial_request, metadata=metadata)
    (Python) File "/usr/local/lib/python3.11/dist-packages/grpc/_channel.py", line 1396, in __call__
        call = self._managed_call(
    (Python) File "/usr/local/lib/python3.11/dist-packages/grpc/_channel.py", line 1785, in create
        call = state.channel.integrated_call(
    (Python) File "/usr/lib/python3.11/threading.py", line 905, in __init__
        self._started = Event()
    (Python) File "/usr/lib/python3.11/threading.py", line 563, in __init__
        self._cond = Condition(Lock())
    (Python) File "/usr/lib/python3.11/threading.py", line 254, in __init__
        self._release_save = lock._release_save
    (Python) File "/__w/spark/spark/python/pyspark/ml/util.py", line 379, in wrapped
        self._remote_model_obj.release_ref()
    (Python) File "/__w/spark/spark/python/pyspark/ml/util.py", line 162, in release_ref
        del_remote_cache(self.ref_id)
    (Python) File "/__w/spark/spark/python/pyspark/ml/util.py", line 358, in del_remote_cache
        session.client._delete_ml_cache([ref_id])
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/core.py", line 2137, in _delete_ml_cache
        (_, properties, _) = self.execute_command(command)
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/core.py", line 1162, in execute_command
        data, _, metrics, observed_metrics, properties = self._execute_and_fetch(
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/core.py", line 1664, in _execute_and_fetch
        for response in self._execute_and_fetch_as_iterator(
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/core.py", line 1621, in _execute_and_fetch_as_iterator
        generator = ExecutePlanResponseReattachableIterator(
    (Python) File "/__w/spark/spark/python/pyspark/sql/connect/client/reattach.py", line 127, in __init__
        self._stub.ExecutePlan(self._initial_request, metadata=metadata)
    (Python) File "/usr/local/lib/python3.11/dist-packages/grpc/_channel.py", line 1396, in __call__
        call = self._managed_call(
    (Python) File "/usr/local/lib/python3.11/dist-packages/grpc/_channel.py", line 1784, in create
        with state.lock:
   ```
   
   The deadlock happens in gRPC's `create` function: it acquires an exclusive lock to create a gRPC call, and while that lock is still held, another gRPC call is started on the same thread and tries to acquire the same lock, which is not re-entrant.
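
   For illustration only (this is not the Spark or grpc source), a plain `threading.Lock` cannot be re-acquired by the thread that already holds it, which is exactly the situation the nested `ExecutePlan` call runs into:

```python
import threading

lock = threading.Lock()  # plain Lock, not an RLock, so not re-entrant

lock.acquire()
# A second acquire on the same thread would block forever; a timeout lets us
# observe the failure instead of hanging. This mirrors the traceback above:
# the channel's internal lock is held while creating one call, and the nested
# _delete_ml_cache call tries to take the same lock on the same thread.
acquired_again = lock.acquire(timeout=1)
print("re-acquired:", acquired_again)  # prints: re-acquired: False
lock.release()
```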
   
   This is triggered by garbage collection: the `__del__` method of `JavaWrapper` tries to release the cache by sending a gRPC command to the remote server.
   
   The design is fundamentally wrong and there is no easy fix. You simply cannot send a gRPC command in `__del__`, because that method can be triggered at an arbitrary point (gc can happen at any time).
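
   As a minimal sketch of why this is unsafe (the `CacheRef` class below is an illustrative stand-in, not the actual Spark class), a finalizer on a gc-collected object can fire at an essentially arbitrary point in unrelated code:

```python
import gc

class CacheRef:
    """Illustrative stand-in; in Spark the finalizer path lives in the ML wrappers."""

    def __del__(self):
        # In the real code this path ends up sending a gRPC command
        # (_delete_ml_cache); here we only print to show when it fires.
        print("__del__ fired while other work was in progress")

def make_cycle():
    ref = CacheRef()
    ref.cycle = ref  # reference cycle: only the gc can reclaim this object

make_cycle()
print("doing unrelated work...")
gc.collect()  # in real workloads the gc can run at any allocation; the finalizer fires here
```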
   
   I can think of three options:
   
   1. Like this PR: simply do not clear the cache eagerly. This is the simplest way to solve the issue; the Connect client will still send a command to clear the cache when it is deleted.
   2. Disable gc while we build the command. Not the best solution, but it might just work. It plays with Python's internal mechanisms, but the code would stay clean and we would get the desired behavior.
   3. Do something like `execute_command_later`, and queue the commands to run in the next synchronous call to `execute_command` (see the sketch after this list). This needs a bit of refactoring of the existing framework and the command would be delayed, but code-wise it might be the most accurate.
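
   To make option 3 concrete, here is a minimal, hypothetical sketch; the names (`CacheCleanupQueue`, `schedule_delete`, `drain`) are illustrative and do not exist in the codebase. The idea is that `__del__` only records the ref id, and the actual delete command is issued later from a normal synchronous code path:

```python
import collections

class CacheCleanupQueue:
    """Hypothetical helper for option 3 (not part of the current codebase)."""

    def __init__(self) -> None:
        # deque.append is safe to call from a finalizer: it does not take a
        # lock that a __del__ running mid-call could deadlock on.
        self._pending: collections.deque = collections.deque()

    def schedule_delete(self, ref_id: str) -> None:
        # Called from __del__ / release_ref: just bookkeeping, no gRPC traffic.
        self._pending.append(ref_id)

    def drain(self) -> list:
        # Called at the start of the next synchronous command execution, where
        # no gRPC channel lock is held, so sending the delete command is safe.
        ref_ids = []
        while self._pending:
            ref_ids.append(self._pending.popleft())
        return ref_ids
```

   The client would then drain the queue right before executing the user's command and, if the drained list is non-empty, issue a single `_delete_ml_cache` call, so the cache is still cleaned up but never from inside a finalizer.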
   
   Note that this problem does not only affect this specific test, or even just our test suite: it is a real deadlock that could hit our users, so we should backport the final fix.
   
   For now we need a quick decision on how to mitigate the flaky test, because it is disrupting our workflow; a single flaky test makes the whole workflow fragile.
   
   ### Does this PR introduce _any_ user-facing change?
   <!--
   Note that it means *any* user-facing change including all aspects such as 
new features, bug fixes, or other behavior changes. Documentation-only updates 
are not considered user-facing changes.
   
   If yes, please clarify the previous behavior and the change this PR proposes 
- provide the console output, description and/or an example to show the 
behavior difference if possible.
   If possible, please also clarify if this is a user-facing change compared to 
the released Spark versions or within the unreleased branches such as master.
   If no, write 'No'.
   -->
   
   Not really; users should not notice that the cache is no longer cleared eagerly.
   
   ### How was this patch tested?
   <!--
   If tests were added, say they were added here. Please make sure to add some 
test cases that check the changes thoroughly including negative and positive 
cases if possible.
   If it was tested in a way different from regular unit tests, please clarify 
how you tested step by step, ideally copy and paste-able, so that other 
reviewers can test and check, and descendants can verify in the future.
   If tests were not added, please describe why they were not added and/or why 
it was difficult to add.
   If benchmark tests were added, please run the benchmarks in GitHub Actions 
for the consistent environment, and the instructions could accord to: 
https://spark.apache.org/developer-tools.html#github-workflow-benchmarks.
   -->
   
   The issue is not easily reproducible locally, so we need to rely on CI.
   
   ### Was this patch authored or co-authored using generative AI tooling?
   <!--
   If generative AI tooling has been used in the process of authoring this 
patch, please include the
   phrase: 'Generated-by: ' followed by the name of the tool and its version.
   If no, write 'No'.
   Please refer to the [ASF Generative Tooling 
Guidance](https://www.apache.org/legal/generative-tooling.html) for details.
   -->
   
   No


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

