Github user kiszk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22233#discussion_r213009058
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
    @@ -671,7 +674,7 @@ case class AlterTableRecoverPartitionsCommand(
             val value = ExternalCatalogUtils.unescapePathName(ps(1))
             if (resolver(columnName, partitionNames.head)) {
              scanPartitions(spark, fs, filter, st.getPath, spec ++ Map(partitionNames.head -> value),
    -            partitionNames.drop(1), threshold, resolver)
    +            partitionNames.drop(1), threshold, resolver, listFilesInParallel = false)
    --- End diff --
    
    Thank you for attaching the stack trace. I have just looked at it, and it looks strange to me: every thread is `waiting for`, there is no blocker, and only one `locked` entry exists.
    In a typical case, a deadlock occurs because of a blocker, as in the stack trace attached to #22221.
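
    For reference, here is a minimal standalone sketch (not taken from this PR) of that typical case: two threads that each hold one monitor while waiting for the other. A jstack dump of this shows each thread as `waiting to lock` a monitor that the other thread has `locked`, i.e. each thread has a visible blocker.

    ```scala
    object ClassicDeadlockSketch {
      private val lockA = new Object
      private val lockB = new Object

      private def spawn(body: => Unit): Thread = {
        val t = new Thread(new Runnable { def run(): Unit = body })
        t.start()
        t
      }

      def main(args: Array[String]): Unit = {
        // Thread 1 holds lockA and waits for lockB; thread 2 holds lockB and
        // waits for lockA. Each thread's blocker is the monitor the other holds.
        val t1 = spawn { lockA.synchronized { Thread.sleep(100); lockB.synchronized { } } }
        val t2 = spawn { lockB.synchronized { Thread.sleep(100); lockA.synchronized { } } }
        // Hangs by design: this is the classic deadlock with a visible blocker.
        t1.join(); t2.join()
      }
    }
    ```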
    
    I will investigate it further tomorrow to determine whether we need to keep this implementation or can revert to the original implementation that uses Scala parallel collections.
    
    ```
    ...
            - parking to wait for  <0x0000000793c0d610> (a scala.concurrent.impl.Promise$CompletionLatch)
            at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
            at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
            at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:206)
            at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
            at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
            at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
            at org.apache.spark.util.ThreadUtils$.parmap(ThreadUtils.scala:317)
            at org.apache.spark.sql.execution.command.AlterTableRecoverPartitionsCommand.scanPartitions(ddl.scala:690)
            at org.apache.spark.sql.execution.command.AlterTableRecoverPartitionsCommand.run(ddl.scala:626)
            at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
            - locked <0x0000000793b04e88> (a org.apache.spark.sql.execution.command.ExecutedCommandExec)
            at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
            at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    ...
    ```
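
    By contrast, the trace above shows every thread parked on a `scala.concurrent.impl.Promise$CompletionLatch` with no monitor blocking it. One generic way such a dump can arise is pool starvation: tasks on a bounded pool block in `Await.result` waiting for nested tasks that can never be scheduled because every worker is already blocked. The sketch below only illustrates that general pattern; it is not Spark's actual `ThreadUtils.parmap` code and not a diagnosis of this PR.

    ```scala
    import java.util.concurrent.Executors
    import scala.concurrent.duration.Duration
    import scala.concurrent.{Await, ExecutionContext, Future}

    object PoolStarvationSketch {
      // A small fixed pool stands in for whatever bounded pool runs the futures.
      private val pool: ExecutionContext =
        ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2))

      // Recursively fan work out as futures and block on the nested results.
      // Once every worker is blocked in Await.result (parked on a
      // CompletionLatch, like the trace above), the nested futures can never
      // run, so the process hangs with no "blocker" visible in the dump.
      def scan(depth: Int): Int = {
        if (depth == 0) {
          1
        } else {
          val children = (1 to 2).map(_ => Future(scan(depth - 1))(pool))
          children.map(f => Await.result(f, Duration.Inf)).sum
        }
      }

      def main(args: Array[String]): Unit = {
        // Hangs by design once the recursion is deeper than the pool can absorb.
        println(scan(3))
      }
    }
    ```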


---
