This is an automated email from the ASF dual-hosted git repository.

apitrou pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/arrow.git


The following commit(s) were added to refs/heads/main by this push:
     new adbeabfda7 GH-34509: [C++][Parquet] Improve docstrings for ArrowReaderProperties::batch_size (#36486)
adbeabfda7 is described below

commit adbeabfda74a50fbcaef4cd8e6169fff26ae09a8
Author: mwish <[email protected]>
AuthorDate: Thu Jul 6 21:52:05 2023 +0800

    GH-34509: [C++][Parquet] Improve docstrings for ArrowReaderProperties::batch_size (#36486)
    
    Make it clear that the `batch_size` setting is ignored by some Parquet FileReader APIs.
    * Closes: #34509
    
    Lead-authored-by: mwish <[email protected]>
    Co-authored-by: Antoine Pitrou <[email protected]>
    Signed-off-by: Antoine Pitrou <[email protected]>
---
 cpp/src/parquet/properties.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/cpp/src/parquet/properties.h b/cpp/src/parquet/properties.h
index 0a69659508..c195ab8079 100644
--- a/cpp/src/parquet/properties.h
+++ b/cpp/src/parquet/properties.h
@@ -817,11 +817,14 @@ class PARQUET_EXPORT ArrowReaderProperties {
     }
   }
 
-  /// \brief Set the maximum number of rows to read into a chunk or record batch.
+  /// \brief Set the maximum number of rows to read into a record batch.
   ///
   /// Will only be fewer rows when there are no more rows in the file.
+  /// Note that some APIs such as ReadTable may ignore this setting.
   void set_batch_size(int64_t batch_size) { batch_size_ = batch_size; }
-  /// Return the batch size.
+  /// Return the batch size in rows.
+  ///
+  /// Note that some APIs such as ReadTable may ignore this setting.
   int64_t batch_size() const { return batch_size_; }
 
   /// Enable read coalescing (default false).
