kbendick commented on a change in pull request #3529:
URL: https://github.com/apache/iceberg/pull/3529#discussion_r747023205



##########
File path: spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/SparkDistributionAndOrderingUtil.java
##########
@@ -71,7 +72,8 @@ public static Distribution buildRequiredDistribution(Table table, DistributionMo
   }
 
   public static SortOrder[] convert(org.apache.iceberg.SortOrder sortOrder) {
-    List<OrderField> converted = SortOrderVisitor.visit(sortOrder, new SortOrderToSpark());
+    Map<Integer, String> quotedNameById = SparkSchemaUtil.indexQuotedNameById(sortOrder.schema());

Review comment:
       I do think it's right to be a bit concerned. I often hear about tables that have hundreds of columns.
   
   If I'm not mistaken, though, we're only indexing the columns that are sorted / involved in the transform spec. Even when tables have hundreds of columns, do we think it's very common to have nearly as many in the partition spec / sort ordering?
   
   If I have that correct, I personally don't see the need to add that additional complexity. At the very least, it could be handled in another PR.
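   
   For illustration, a minimal sketch of the alternative being weighed here (indexing quoted names only for the fields the sort order references, rather than the whole schema) could look like the following. `indexOnlySortedColumns` is a hypothetical helper, not anything in the PR, and Spark-style quoting of each name part is omitted:
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import org.apache.iceberg.Schema;
   import org.apache.iceberg.SortField;
   import org.apache.iceberg.SortOrder;
   
   class SortedColumnIndexSketch {
     // Hypothetical sketch: build the id -> name map only for the fields
     // the sort order actually touches, instead of the entire schema.
     static Map<Integer, String> indexOnlySortedColumns(SortOrder sortOrder) {
       Schema schema = sortOrder.schema();
       Map<Integer, String> quotedNameById = new HashMap<>();
       for (SortField field : sortOrder.fields()) {
         int sourceId = field.sourceId();
         // findColumnName returns the dot-separated column name; a real
         // implementation would still need to quote each part for Spark.
         quotedNameById.put(sourceId, schema.findColumnName(sourceId));
       }
       return quotedNameById;
     }
   }
   ```
   
   Whether that's worth it over the one-call `SparkSchemaUtil.indexQuotedNameById(sortOrder.schema())` in the diff is exactly the trade-off above: it only pays off if full-schema indexing shows up as a cost.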



