datapythonista commented on code in PR #2730:
URL: https://github.com/apache/arrow-rs/pull/2730#discussion_r972079444
##########
parquet/src/arrow/mod.rs:
##########
@@ -66,26 +66,24 @@
//! # Example of reading parquet file into arrow record batch
//!
//! ```rust
-//! use arrow::record_batch::RecordBatchReader;
-//! use parquet::file::reader::{FileReader, SerializedFileReader};
-//! use parquet::arrow::{ParquetFileArrowReader, ArrowReader, ProjectionMask};
-//! use std::sync::Arc;
//! use std::fs::File;
+//! use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
//!
+//! # use std::sync::Arc;
//! # use arrow::array::Int32Array;
//! # use arrow::datatypes::{DataType, Field, Schema};
//! # use arrow::record_batch::RecordBatch;
//! # use parquet::arrow::arrow_writer::ArrowWriter;
+//! #
//! # let ids = Int32Array::from(vec![1, 2, 3, 4]);
//! # let schema = Arc::new(Schema::new(vec![
-//! # Field::new("id", DataType::Int32, false),
+//! # Field::new("id", DataType::Int32, false),
//! # ]));
//! #
//! # // Write to a memory buffer (can also write to a File)
Review Comment:
Thanks for the heads-up, I didn't realize that. I've removed it; I don't think
it adds much value to say we're writing to a file here. But let me know if
you have a better idea.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]