rdettai commented on a change in pull request #2000:
URL: https://github.com/apache/arrow-datafusion/pull/2000#discussion_r830852358



##########
File path: datafusion/src/physical_plan/file_format/parquet.rs
##########
@@ -236,32 +237,56 @@ impl ExecutionPlan for ParquetExec {
 
         let adapter = SchemaAdapter::new(self.base_config.file_schema.clone());
 
-        let join_handle = task::spawn_blocking(move || {
-            if let Err(e) = read_partition(
-                object_store.as_ref(),
-                adapter,
-                partition_index,
-                &partition,
-                metrics,
-                &projection,
-                &pruning_predicate,
-                batch_size,
-                response_tx.clone(),
-                limit,
-                partition_col_proj,
-            ) {
-                println!(
+        let join_handle = if projection.is_empty() {

Review comment:
       Can't we have this conditional within the `spawn_blocking` statement?
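
Roughly what I have in mind (a sketch only: the argument list for `read_partition_no_file_columns` and the error message are placeholders, the real ones are defined elsewhere in this PR):

```rust
let join_handle = task::spawn_blocking(move || {
    // Pick the reader inside the blocking task so there is a single spawn.
    let result = if projection.is_empty() {
        // Placeholder argument list for the new helper added in this PR.
        read_partition_no_file_columns(
            object_store.as_ref(),
            &partition,
            batch_size,
            response_tx.clone(),
            limit,
        )
    } else {
        read_partition(
            object_store.as_ref(),
            adapter,
            partition_index,
            &partition,
            metrics,
            &projection,
            &pruning_predicate,
            batch_size,
            response_tx.clone(),
            limit,
            partition_col_proj,
        )
    };
    if let Err(e) = result {
        // Same error reporting as the current code (message elided here).
        println!("error reading parquet partition: {:?}", e);
    }
});
```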

##########
File path: datafusion/src/physical_plan/file_format/parquet.rs
##########
@@ -446,6 +471,62 @@ fn build_row_group_predicate(
     }
 }
 
+fn read_partition_no_file_columns(

Review comment:
       You don't need to open any row groups to get the number of rows in a 
Parquet file; all of that information is in the footer. You should use 
`file_reader.metadata()` here. Once you do that, you can spare yourself the 
rather verbose limit logic: just iterate through all row groups in the 
metadata and count the rows in the file. That's very cheap because the data 
structure is already loaded into memory when the footer is parsed. This should 
greatly simplify this code path, and then we can re-evaluate whether we need 
to merge it with the one above or not 😉.
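
Something along these lines (a standalone sketch against the `parquet` crate; `count_rows` and the file-path argument are illustrative, the real code would go through the object store reader the same way `read_partition` does):

```rust
use std::fs::File;

use parquet::errors::{ParquetError, Result};
use parquet::file::reader::{FileReader, SerializedFileReader};

/// Count the rows of a Parquet file from the footer metadata alone:
/// no row group data is read or decoded.
fn count_rows(path: &str) -> Result<i64> {
    let file = File::open(path).map_err(|e| ParquetError::General(e.to_string()))?;
    let reader = SerializedFileReader::new(file)?;
    let metadata = reader.metadata();
    // Each RowGroupMetaData already carries its row count, so this is just
    // a sum over an in-memory structure parsed from the footer.
    Ok((0..metadata.num_row_groups())
        .map(|i| metadata.row_group(i).num_rows())
        .sum())
}
```

With the total row count in hand, applying the `limit` becomes a simple `min` rather than per-row-group bookkeeping.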



