alamb commented on code in PR #7934:
URL: https://github.com/apache/arrow-rs/pull/7934#discussion_r2211458267
##########
parquet-variant/src/variant/object.rs:
##########
@@ -244,16 +252,22 @@ impl<'m, 'v> VariantObject<'m, 'v> {
// to check lexicographical order
//
// Since we are probing the metadata dictionary by field id, this also verifies field ids are in-bounds
- let are_field_names_sorted = field_ids
- .iter()
- .map(|&i| self.metadata.get(i))
- .collect::<Result<Vec<_>, _>>()?
- .is_sorted();
-
- if !are_field_names_sorted {
- return Err(ArrowError::InvalidArgumentError(
- "field names not sorted".to_string(),
- ));
+ let mut current_field_name = match field_ids_iter.next() {
Review Comment:
It wasn't added in this PR, but this check for the field names being sorted
doesn't seem right to me -- I thought the only requirement on an object's
fields was that the field_ids are sorted (so lookup by field_id can be fast),
but the corresponding field names don't have to be sorted.

Maybe @friendlymatthew can help.
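For what it's worth, here is a minimal sketch of the property I mean (a
standalone function with made-up names, not this crate's API): as long as the
field ids themselves are sorted, a lookup by field id can binary-search the id
list, regardless of how the corresponding field names compare.

```rust
/// Hypothetical helper, not part of this PR: it relies only on the field ids
/// being sorted, never on the field names being sorted.
fn find_field_index(sorted_field_ids: &[u32], target_id: u32) -> Option<usize> {
    // binary_search is only valid because the ids are in ascending order
    sorted_field_ids.binary_search(&target_id).ok()
}

fn main() {
    // the ids are sorted even if the names they refer to are not
    let field_ids = [0u32, 2, 5, 9];
    assert_eq!(find_field_index(&field_ids, 5), Some(2));
    assert_eq!(find_field_index(&field_ids, 3), None);
}
```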
##########
parquet-variant/src/variant/object.rs:
##########
@@ -217,23 +217,31 @@ impl<'m, 'v> VariantObject<'m, 'v> {
self.header.field_ids_start_byte() as _..self.first_field_offset_byte as _,
)?;
- let field_ids = map_bytes_to_offsets(field_id_buffer, self.header.field_id_size)
- .collect::<Vec<_>>();
-
+ let mut field_ids_iter =
+ map_bytes_to_offsets(field_id_buffer, self.header.field_id_size);
// Validate all field ids exist in the metadata dictionary and the corresponding field names are lexicographically sorted
if self.metadata.is_sorted() {
// Since the metadata dictionary has unique and sorted field names, we can also guarantee this object's field names
// are lexicographically sorted by their field id ordering
- if !field_ids.is_sorted() {
- return Err(ArrowError::InvalidArgumentError(
- "field names not sorted".to_string(),
- ));
- }
+ let dictionary_size = self.metadata.dictionary_size();
+
+ if let Some(mut current_id) = field_ids_iter.next() {
+ for next_id in field_ids_iter {
+ if current_id >= dictionary_size {
+ return Err(ArrowError::InvalidArgumentError(
+ "field id is not valid".to_string(),
+ ));
+ }
+
+ if next_id <= current_id {
+ return Err(ArrowError::InvalidArgumentError(
+ "field names not sorted".to_string(),
+ ));
+ }
+ current_id = next_id;
+ }
- // Since field ids are sorted, if the last field is smaller than the dictionary size,
- // we also know all field ids are smaller than the dictionary size and in-bounds.
- if let Some(&last_field_id) = field_ids.last() {
- if last_field_id >= self.metadata.dictionary_size() {
+ if current_id >= dictionary_size {
Review Comment:
It took me a few reads to figure out why this (redundant) check is still
needed.

I am not sure if there is some way to refactor the loop to avoid it (perhaps
by keeping a `previous_id: Option<u32>`, as you did in the loop above) 🤔

No changes needed, I just figured I would point it out.
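Something along these lines is what I had in mind -- only a rough sketch,
written as a free-standing function with plain `String` errors and assumed
integer types rather than a drop-in change to this method:

```rust
/// Rough sketch only: tracking the previous id lets a single pass cover both
/// the bounds check and the ordering check for every field id.
fn validate_field_ids(
    field_ids: impl Iterator<Item = usize>,
    dictionary_size: usize,
) -> Result<(), String> {
    let mut previous_id: Option<usize> = None;
    for id in field_ids {
        // every id, including the last one, is bounds-checked here
        if id >= dictionary_size {
            return Err("field id is not valid".to_string());
        }
        // ordering check against the previous id, if there was one
        if previous_id.is_some_and(|prev| id <= prev) {
            return Err("field names not sorted".to_string());
        }
        previous_id = Some(id);
    }
    Ok(())
}
```

With that shape the trailing `if current_id >= dictionary_size` after the loop
disappears, because the last id gets the same in-loop bounds check as every
other id.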
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]