alamb commented on code in PR #9450:
URL: https://github.com/apache/arrow-rs/pull/9450#discussion_r2956045936
##########
parquet/src/lib.rs:
##########
@@ -67,6 +67,28 @@
//! * [`ArrowColumnWriter`] for writing using multiple threads,
//! * [`RowFilter`] to apply filters during decode
//!
+//! ### EXPERIMENTAL: Content-Defined Chunking
+//!
+//! [`ArrowWriter`] supports content-defined chunking (CDC), which creates data page
+//! boundaries based on content rather than fixed sizes. CDC enables efficient
+//! deduplication in content-addressable storage (CAS) systems: when the same data
+//! appears in successive file versions, it will produce identical byte sequences that
+//! CAS backends can deduplicate.
+//!
+//! Enable CDC via [`WriterProperties`]:
+//!
+//! ```no_run
Review Comment:
Any reason not to run this example? It seems like it should work just fine 🤷
##########
parquet/src/arrow/arrow_writer/mod.rs:
##########
@@ -958,7 +1009,26 @@ impl ArrowRowGroupWriter {
let mut writers = self.writers.iter_mut();
             for (field, column) in self.schema.fields().iter().zip(batch.columns()) {
for leaf in compute_leaves(field.as_ref(), column)? {
- writers.next().unwrap().write(&leaf)?
+ writers.next().unwrap().write(&leaf)?;
+ }
+ }
+ Ok(())
+ }
+
+ fn write_with_chunkers(
Review Comment:
having two code paths (`write` and `write_with_chunkers`) is kind of weird to me
and seems inconsistent with other optional features like encoding or compression.
I wonder if it would encapsulate the code more if we extended `write` with an
`Option<&[ContentDefinedChunker]>` rather than have two separate external
functions
##########
parquet/src/column/chunker/cdc_codegen.py:
##########
Review Comment:
I think it is fine to check this in
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]