[ https://issues.apache.org/jira/browse/PARQUET-1166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16392304#comment-16392304 ]

ASF GitHub Bot commented on PARQUET-1166:
-----------------------------------------

advancedxy commented on a change in pull request #445: [WIP] PARQUET-1166: Add 
GetRecordBatchReader in parquet/arrow/reader
URL: https://github.com/apache/parquet-cpp/pull/445#discussion_r173353428
 
 

 ##########
 File path: src/parquet/arrow/reader.cc
 ##########
 @@ -152,6 +153,64 @@ class SingleRowGroupIterator : public FileColumnIterator {
   bool done_;
 };
 
+class RowGroupRecordBatchReader : public ::arrow::RecordBatchReader {
+ public:
+    explicit RowGroupRecordBatchReader(const std::vector<int>& row_group_indices,
+                                       const std::vector<int>& column_indices,
+                                       FileReader* reader)
+      : row_group_indices_(row_group_indices),
+        column_indices_(column_indices),
+        file_reader_(reader),
+        next_row_group_(0) {
+      file_reader_->GetSchema(column_indices_, &schema_);
+    }
+
+    ~RowGroupRecordBatchReader() {}
+
+    std::shared_ptr<::arrow::Schema> schema() const override {
+      return schema_;
+    }
+
+    Status ReadNext(std::shared_ptr<::arrow::RecordBatch> *out) override {
+      if (table_ != nullptr) { // one row group has been loaded
+        std::shared_ptr<::arrow::RecordBatch> tmp;
+        table_batch_reader_->ReadNext(&tmp);
+        if (tmp != nullptr) { // some batches are left in the table
+          *out = tmp;
+          return Status::OK();
+        } else { // the entire table is consumed
+          table_batch_reader_.reset();
+          table_.reset();
+        }
+      }
+
+      // all row groups have been consumed
+      if (next_row_group_ == row_group_indices_.size()) {
+        *out = nullptr;
+        return Status::OK();
+      }
+
+      RETURN_NOT_OK(file_reader_->ReadRowGroup(row_group_indices_[next_row_group_],
 
 Review comment:
   I am most concerned about this one. We have to read an entire row group, but
the caller may consume only the first N RecordBatches.

   I am wondering whether this is optimal.
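
To make the concern concrete, here is a minimal caller-side sketch (not part of the PR; the helper function, row-group index, and headers are illustrative, and it assumes the proposed GetRecordBatchReader() lands on parquet::arrow::FileReader as described further below):

{code:cpp}
// Illustrative sketch only: the caller asks for a single RecordBatch, yet the
// reader's first ReadNext() already materializes the whole row group.
#include <memory>
#include <vector>

#include "arrow/record_batch.h"
#include "arrow/status.h"
#include "parquet/arrow/reader.h"

// Hypothetical helper: read just the first RecordBatch of row group 0.
::arrow::Status PeekFirstBatch(parquet::arrow::FileReader* reader,
                               std::shared_ptr<::arrow::RecordBatch>* first) {
  std::vector<int> row_groups = {0};
  std::shared_ptr<::arrow::RecordBatchReader> batch_reader;
  ::arrow::Status st = reader->GetRecordBatchReader(row_groups, &batch_reader);
  if (!st.ok()) return st;
  // Internally this call goes through ReadRowGroup(), loading the entire row
  // group into an ::arrow::Table even though we stop after one batch.
  return batch_reader->ReadNext(first);
}
{code}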

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [API Proposal] Add GetRecordBatchReader in parquet/arrow/reader.h
> -----------------------------------------------------------------
>
>                 Key: PARQUET-1166
>                 URL: https://issues.apache.org/jira/browse/PARQUET-1166
>             Project: Parquet
>          Issue Type: Improvement
>          Components: parquet-cpp
>            Reporter: Xianjin YE
>            Priority: Major
>
> Hi, I'd like to propose a new API to better support splittable reading of a
> Parquet file.
> The intent of this API is to allow selective reading of RowGroups (normally
> contiguous, but they can be arbitrary as long as the row_group_indices are
> sorted and unique, e.g. [1, 3, 5]).
> The proposed API would be something like this:
> {code:java}
> ::arrow::Status GetRecordBatchReader(const std::vector<int>& row_group_indices,
>                                      std::shared_ptr<::arrow::RecordBatchReader>* out);
>
> ::arrow::Status GetRecordBatchReader(const std::vector<int>& row_group_indices,
>                                      const std::vector<int>& column_indices,
>                                      std::shared_ptr<::arrow::RecordBatchReader>* out);
> {code}
> With the new API, we can split a Parquet file into RowGroups that can be
> processed by multiple tasks (possibly on different hosts, like Map tasks in
> MapReduce).
> [~wesmckinn] [~xhochy] What do you think?
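
A caller-side usage sketch of the proposed API (a hedged example, not part of the issue text: it assumes the overloads above land on parquet::arrow::FileReader, and all index values are illustrative):

{code:cpp}
// Minimal sketch: scan only row groups 1, 3, 5 and columns 0, 2 as a stream
// of RecordBatches, e.g. inside one split/task of a distributed job.
#include <memory>
#include <vector>

#include "arrow/record_batch.h"
#include "arrow/status.h"
#include "parquet/arrow/reader.h"

::arrow::Status ScanSelectedRowGroups(parquet::arrow::FileReader* reader) {
  std::vector<int> row_groups = {1, 3, 5};  // sorted and unique, per the proposal
  std::vector<int> columns = {0, 2};        // projected column indices

  std::shared_ptr<::arrow::RecordBatchReader> batch_reader;
  ::arrow::Status st =
      reader->GetRecordBatchReader(row_groups, columns, &batch_reader);
  if (!st.ok()) return st;

  std::shared_ptr<::arrow::RecordBatch> batch;
  while (true) {
    st = batch_reader->ReadNext(&batch);
    if (!st.ok()) return st;
    if (batch == nullptr) break;  // all selected row groups have been consumed
    // ... process the batch (e.g. hand it to the task's downstream operator) ...
  }
  return ::arrow::Status::OK();
}
{code}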



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
