Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22036#discussion_r209481344
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzePartitionCommand.scala ---
    @@ -140,7 +140,13 @@ case class AnalyzePartitionCommand(
         val df = tableDf.filter(Column(filter)).groupBy(partitionColumns: _*).count()
     
         df.collect().map { r =>
    -      val partitionColumnValues = partitionColumns.indices.map(r.get(_).toString)
    +      val partitionColumnValues = partitionColumns.indices.map { i =>
    +        if (r.isNullAt(i)) {
    +          ExternalCatalogUtils.DEFAULT_PARTITION_NAME
    --- End diff ---
    
    do we need to change the read path? i.e. where we use these statistics.
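
    For readers skimming the thread, a minimal standalone sketch (not the actual
    Spark code) of the write-side behavior the diff introduces: each collected
    partition row is rendered as partition value strings, with NULLs mapped to the
    default partition name. The `DefaultPartitionName` constant and
    `partitionValueStrings` helper are hypothetical stand-ins; the real code uses
    `ExternalCatalogUtils.DEFAULT_PARTITION_NAME`, assumed here to be the usual
    `__HIVE_DEFAULT_PARTITION__` value.

    object NullPartitionValueSketch {
      // Hypothetical stand-in for ExternalCatalogUtils.DEFAULT_PARTITION_NAME.
      val DefaultPartitionName: String = "__HIVE_DEFAULT_PARTITION__"

      // Convert one row of partition column values (nulls allowed) into the
      // string form used to identify the partition.
      def partitionValueStrings(row: Seq[Any]): Seq[String] =
        row.map {
          case null  => DefaultPartitionName
          case other => other.toString
        }

      def main(args: Array[String]): Unit = {
        // Example: partition columns (year, month) where month is NULL.
        println(partitionValueStrings(Seq(2018, null)))
        // Prints: List(2018, __HIVE_DEFAULT_PARTITION__)
      }
    }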


---
