corwinjoy commented on code in PR #39677:
URL: https://github.com/apache/arrow/pull/39677#discussion_r1456615967


##########
cpp/src/parquet/column_writer.cc:
##########
@@ -323,6 +323,19 @@ class SerializedPageWriter : public PageWriter {
 
     PARQUET_THROW_NOT_OK(sink_->Write(output_data_buffer, output_data_len));
 
+    // Add the dictionary page to the offsets
+    if (offset_index_builder_ != nullptr) {
+      const int64_t compressed_size = output_data_len + header_size;
+      if (compressed_size > std::numeric_limits<int32_t>::max()) {
+        throw ParquetException("Compressed dictionary page size overflows to INT32_MAX.");
+      }
+
+      /// start_pos is a relative offset in the buffered mode. It should be
+      /// adjusted via OffsetIndexBuilder::Finish() after BufferedPageWriter
+      /// has flushed all data pages.
+      offset_index_builder_->AddPage(start_pos, static_cast<int32_t>(compressed_size), -1);
+    }

Review Comment:
   I think this may be a bug or oversight in the existing implementation? When decoding a row group with dictionary encoding enabled, the reader must first read the dictionary page. Currently the dictionary page is not written to the OffsetIndex section, and without it there is no reliable way to locate the dictionary page (page sizes are variable). Since it is a type of data page, I believe it should be recorded in the OffsetIndex collection. In any case, the current implementation needs it to decode dictionary data.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
