Github user MaxGekk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22233#discussion_r213050406
  
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
    @@ -671,7 +674,7 @@ case class AlterTableRecoverPartitionsCommand(
             val value = ExternalCatalogUtils.unescapePathName(ps(1))
             if (resolver(columnName, partitionNames.head)) {
               scanPartitions(spark, fs, filter, st.getPath, spec ++ 
Map(partitionNames.head -> value),
    -            partitionNames.drop(1), threshold, resolver)
    +            partitionNames.drop(1), threshold, resolver, 
listFilesInParallel = false)
    --- End diff --
    
    I think the root cause is clear: a fixed thread pool plus submitting and 
then waiting on a future from inside another future running on the same thread 
pool. @gatorsmile I will revert back to a parallel collection here if you don't 
mind, since there is no reason for `parmap` in this place.
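
    This is not code from the PR, just a minimal sketch of the starvation 
scenario described above: a bounded pool where an outer task blocks on an 
inner future that can never be scheduled, because the outer task holds the 
pool's only thread. The helper name `nestedWaitOutcome` and the pool sizes 
are illustrative assumptions.

    ```scala
    import java.util.concurrent.Executors
    import scala.concurrent.{Await, ExecutionContext, Future, TimeoutException}
    import scala.concurrent.duration._

    // Illustrative helper: returns "deadlocked" when a nested blocking wait
    // starves a fixed-size pool, "completed" when a spare thread is available.
    def nestedWaitOutcome(poolSize: Int): String = {
      val pool = Executors.newFixedThreadPool(poolSize)
      implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)

      val outer = Future {
        val inner = Future { 42 }       // queued behind the outer task
        Await.result(inner, 500.millis) // blocks a pool thread while waiting
      }

      val result =
        try { Await.result(outer, 2.seconds); "completed" }
        catch { case _: TimeoutException => "deadlocked" }
      pool.shutdownNow()
      result
    }
    ```

    With `poolSize = 1` the inner future never runs and the wait times out; 
with `poolSize = 2` the inner future runs on the spare thread and the outer 
one completes normally. A parallel collection avoids this because it does not 
re-enter the same bounded executor from inside a running task.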

