rdblue commented on a change in pull request #227: ORC column map fix
URL: https://github.com/apache/incubator-iceberg/pull/227#discussion_r296335431
 
 

 ##########
 File path: spark/src/main/java/org/apache/iceberg/spark/data/SparkOrcReader.java
 ##########
 @@ -58,27 +55,25 @@
  */
 public class SparkOrcReader implements OrcValueReader<InternalRow> {
   private final static int INITIAL_SIZE = 128 * 1024;
-  private final int numFields;
-  private final TypeDescription readSchema;
+  private final List<TypeDescription> columns;
   private final Converter[] converters;
 
-  public SparkOrcReader(Schema readSchema) {
-    this.readSchema = TypeConversion.toOrc(readSchema, new ColumnIdMap());
-    numFields = readSchema.columns().size();
+  public SparkOrcReader(TypeDescription readOrcSchema) {
+    columns = readOrcSchema.getChildren();
     converters = buildConverters();
   }
 
   private Converter[] buildConverters() {
-    final Converter[] converters = new Converter[numFields];
-    for(int c = 0; c < numFields; ++c) {
-      converters[c] = buildConverter(readSchema.getChildren().get(c));
+    final Converter[] converters = new Converter[columns.size()];
+    for(int c = 0; c < columns.size(); ++c) {
 
 Review comment:
  Nit: avoid calling `size()` on every loop iteration by using `converters.length` in the loop condition.
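  A minimal sketch of the suggested change, with the surrounding class simplified (the `Converter` interface and `buildConverters` signature here are stand-ins, not the real `SparkOrcReader` types): the loop bound reads the array's `length` field instead of invoking `columns.size()` on each iteration.

  ```java
  import java.util.Arrays;
  import java.util.List;

  public class ConverterLoop {
    // Stand-in for the real Converter type used by SparkOrcReader.
    interface Converter {}

    static Converter[] buildConverters(List<String> columns) {
      final Converter[] converters = new Converter[columns.size()];
      // Use converters.length (a field read) rather than columns.size()
      // (a method call) as the loop bound, per the review suggestion.
      for (int c = 0; c < converters.length; ++c) {
        final String column = columns.get(c);
        converters[c] = new Converter() {
          @Override
          public String toString() {
            return "converter(" + column + ")";
          }
        };
      }
      return converters;
    }

    public static void main(String[] args) {
      Converter[] converters = buildConverters(Arrays.asList("id", "data"));
      System.out.println(converters.length);
    }
  }
  ```

  The two bounds are equivalent here because the array is sized from `columns.size()` just before the loop; the change only removes a repeated virtual call.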

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
