Github user dongjoon-hyun commented on the issue: https://github.com/apache/spark/pull/19686

Thank you for the review. Yes, technically the previous one is not wrong; it denotes an unordered set of attributes, so this is minor.

In both [FileFormat.buildReaderWithPartitionValues](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormat.scala#L119) and [ParquetFileFormat.buildReaderWithPartitionValues](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L297), `fullSchema` has a consistent meaning, like the following, and it should be ordered. `resultSchema` is not a different thing from `fullSchema`, but it appears to be built in the opposite order, which is confusing at a glance. IMO, it would be better to stay consistent with `fullSchema` while reading the code.

```scala
val fullSchema = requiredSchema.toAttributes ++ partitionSchema.toAttributes
```
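The ordering point above can be sketched without a Spark dependency. This is a minimal, hypothetical illustration (the column names are made up) of why the direction of the concatenation matters: the full schema is the required columns followed by the partition columns, so defining the same schema with the operands flipped produces a differently ordered result.

```scala
// Hypothetical sketch: schemas modeled as ordered sequences of column names.
object FullSchemaOrder {
  def main(args: Array[String]): Unit = {
    val requiredSchema  = Seq("id", "name")    // columns read from the file
    val partitionSchema = Seq("year", "month") // columns from the directory layout

    // Mirrors `requiredSchema.toAttributes ++ partitionSchema.toAttributes`:
    // required columns first, then partition columns.
    val fullSchema = requiredSchema ++ partitionSchema

    // Flipping the concatenation yields the same set but a different order,
    // which is why a consistent definition reads more easily.
    val opposite = partitionSchema ++ requiredSchema

    assert(fullSchema == Seq("id", "name", "year", "month"))
    assert(fullSchema.toSet == opposite.toSet) // equal as unordered sets
    assert(fullSchema != opposite)             // but not as ordered schemas
    println(fullSchema.mkString(", "))
  }
}
```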