parthchandra commented on code in PR #2032:
URL: https://github.com/apache/datafusion-comet/pull/2032#discussion_r2217038689


##########
common/src/main/java/org/apache/comet/parquet/BatchReader.java:
##########
@@ -183,9 +183,7 @@ public BatchReader(
     this.taskContext = TaskContext$.MODULE$.get();
   }
 
-  public BatchReader(AbstractColumnReader[] columnReaders) {
-    // Todo: set useDecimal128 and useLazyMaterialization
-    int numColumns = columnReaders.length;
+  public BatchReader(int numColumns) {

Review Comment:
   > passed to Batch Reader at [here](https://github.com/apache/iceberg/blob/main/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java#L86)
   
   This does not sound correct. BatchReader has no API for setting column readers other than the constructor changed by this PR; the only other way to get column readers is to call `init`, which creates the appropriate readers itself.
   Also, I notice that the current version of the constructor ignores the column readers passed in, so any BatchReader created with this constructor (whether the current version or the one in this PR) ends up with an array of nulls as its column readers.
   
   How is such a BatchReader usable (or useful)?
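
   For illustration, here is a minimal, self-contained sketch of what the two constructor variants amount to. The class name, the field initialization, and the stubbed `AbstractColumnReader` below are assumptions for the sake of the example, not the actual BatchReader code:

   ```java
   // Standalone sketch; AbstractColumnReader is stubbed so the example compiles
   // on its own, and the constructor bodies are assumptions rather than the
   // real BatchReader implementation.
   abstract class AbstractColumnReader {}

   class BatchReaderSketch {
     private final AbstractColumnReader[] columnReaders;

     // Pre-PR constructor: only the length of the argument is used, so the
     // readers the caller passes in are silently discarded.
     BatchReaderSketch(AbstractColumnReader[] readers) {
       this.columnReaders = new AbstractColumnReader[readers.length]; // all nulls
     }

     // Constructor introduced by this PR: same effect, but the signature no
     // longer implies that caller-supplied readers will be used.
     BatchReaderSketch(int numColumns) {
       this.columnReaders = new AbstractColumnReader[numColumns]; // all nulls
     }

     // Unless init() (or something equivalent) later fills in columnReaders,
     // reading a batch would dereference a null reader.
   }
   ```

   Either way, the caller's readers never reach the instance, which is why the question above stands.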



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

