alamb commented on code in PR #8652:
URL: https://github.com/apache/arrow-rs/pull/8652#discussion_r2586669463


##########
arrow-data/src/transform/mod.rs:
##########
@@ -672,12 +674,24 @@ impl<'a> MutableArrayData<'a> {
                             next_offset += dict_len;
                         }
 
-                        build_extend_dictionary(array, offset, offset + dict_len)
+                        // -1 since offset is exclusive
+                        build_extend_dictionary(array, offset, 1.max(offset + dict_len) - 1)
                             .ok_or(ArrowError::DictionaryKeyOverflowError)
                     })
-                    .collect();
-
-                extend_values.expect("MutableArrayData::new is infallible")
+                    .collect::<Result<Vec<_>, ArrowError>>();
+                match result {
+                    Err(_) => {

Review Comment:
   I think we should only retry when the Err is `DictionaryKeyOverflowError` -- 
this code retries regardless of the underlying error
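   A minimal stand-alone sketch of what that guard could look like (using a stand-in `ArrowError` enum, not the real arrow-schema type): only the overflow variant triggers the merge fallback, and any other error propagates unchanged.

   ```rust
   // Hypothetical sketch: retry (merge) only on DictionaryKeyOverflowError.
   #[derive(Debug, PartialEq)]
   enum ArrowError {
       DictionaryKeyOverflowError,
       ComputeError(String),
   }

   fn extend_or_merge(result: Result<Vec<u32>, ArrowError>) -> Result<&'static str, ArrowError> {
       match result {
           // Only a key overflow should fall back to the slower merge path.
           Err(ArrowError::DictionaryKeyOverflowError) => Ok("merged"),
           // Any other error is a real failure and must be propagated.
           Err(e) => Err(e),
           Ok(_) => Ok("fast-path"),
       }
   }
   ```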



##########
arrow-data/src/transform/dictionary.rs:
##########
@@ -0,0 +1,140 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use std::collections::HashMap;
+
+use arrow_buffer::ArrowNativeType;
+use arrow_schema::{ArrowError, DataType};
+
+use crate::{
+    ArrayData,
+    transform::{_MutableArrayData, Extend, MutableArrayData, utils::iter_in_bytes},
+};
+
+pub(crate) fn merge_dictionaries<'a>(

Review Comment:
   Could we please leave some comments about what this function does and why it 
is needed? (aka explain the overflow backup case)



##########
arrow-data/src/transform/utils.rs:
##########
@@ -58,6 +61,37 @@ pub(super) unsafe fn get_last_offset<T: ArrowNativeType>(offset_buffer: &Mutable
     *unsafe { offsets.get_unchecked(offsets.len() - 1) }
 }
 
+fn iter_in_bytes_variable_sized<T: ArrowNativeType + Integer>(data: &ArrayData) -> Vec<&[u8]> {
+    let offsets = data.buffer::<T>(0);
+
+    // the offsets of the `ArrayData` are ignored as they are only applied to the offset buffer.
+    let values = data.buffers()[1].as_slice();
+    (0..data.len())
+        .map(move |i| {
+            let start = offsets[i].to_usize().unwrap();
+            let end = offsets[i + 1].to_usize().unwrap();
+            &values[start..end]
+        })
+        .collect::<Vec<_>>()
+}
+
+fn iter_in_bytes_fixed_sized(data: &ArrayData, size: usize) -> Vec<&[u8]> {
+    let values = &data.buffers()[0].as_slice()[data.offset() * size..];
+    values.chunks(size).collect::<Vec<_>>()
+}
+
+/// iterate values in raw bytes regardless of nullability
+pub(crate) fn iter_in_bytes<'a>(data_type: &DataType, data: &'a ArrayData) -> Vec<&'a [u8]> {

Review Comment:
   this is called `iter_in_bytes...` but it returns a vec 🤔 
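   For what it's worth, one way to make the name honest would be to return a lazy iterator instead of collecting. A simplified stand-alone sketch for the fixed-size case (plain slices, hypothetical function, no `ArrayData` dependency):

   ```rust
   // Hypothetical: return an iterator so the `iter_` name matches the behavior.
   fn iter_in_bytes_fixed<'a>(values: &'a [u8], size: usize) -> impl Iterator<Item = &'a [u8]> + 'a {
       values.chunks(size)
   }
   ```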



##########
arrow-data/src/transform/dictionary.rs:
##########
@@ -0,0 +1,140 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+use std::collections::HashMap;
+
+use arrow_buffer::ArrowNativeType;
+use arrow_schema::{ArrowError, DataType};
+
+use crate::{
+    ArrayData,
+    transform::{_MutableArrayData, Extend, MutableArrayData, utils::iter_in_bytes},
+};
+
+pub(crate) fn merge_dictionaries<'a>(
+    key_data_type: &DataType,
+    value_data_type: &DataType,
+    dicts: &[&'a ArrayData],
+) -> Result<(Vec<Extend<'a>>, ArrayData), ArrowError> {
+    match key_data_type {
+        DataType::UInt8 => merge_dictionaries_casted::<u8>(value_data_type, dicts),
+        DataType::UInt16 => merge_dictionaries_casted::<u16>(value_data_type, dicts),
+        DataType::UInt32 => merge_dictionaries_casted::<u32>(value_data_type, dicts),
+        DataType::UInt64 => merge_dictionaries_casted::<u64>(value_data_type, dicts),
+        DataType::Int8 => merge_dictionaries_casted::<i8>(value_data_type, dicts),
+        DataType::Int16 => merge_dictionaries_casted::<i16>(value_data_type, dicts),
+        DataType::Int32 => merge_dictionaries_casted::<i32>(value_data_type, dicts),
+        DataType::Int64 => merge_dictionaries_casted::<i64>(value_data_type, dicts),
+        _ => unreachable!(),
+    }
+}
+
+fn merge_dictionaries_casted<'a, K: ArrowNativeType>(
+    data_type: &DataType,
+    dicts: &[&'a ArrayData],
+) -> Result<(Vec<Extend<'a>>, ArrayData), ArrowError> {
+    let mut dedup = HashMap::new();
+    let mut indices = vec![];
+    let mut data_refs = vec![];
+    let new_dict_keys = dicts
+        .iter()
+        .enumerate()
+        .map(|(dict_idx, dict)| {
+            let value_data = dict.child_data().first().unwrap();
+            let old_keys = dict.buffer::<K>(0);
+            data_refs.push(value_data);
+            let mut new_keys = vec![K::usize_as(0); old_keys.len()];
+            let values = iter_in_bytes(data_type, value_data);
+            for (key_index, old_key) in old_keys.iter().enumerate() {
+                if dict.is_valid(key_index) {
+                    let value = values[old_key.as_usize()];
+                    match K::from_usize(dedup.len()) {
+                        Some(idx) => {
+                            let idx_for_value = dedup.entry(value).or_insert(idx);
+                            // a new entry
+                            if *idx_for_value == idx {
+                                indices.push((dict_idx, old_key.as_usize()));
+                            }
+
+                            new_keys[key_index] = *idx_for_value;
+                        }
+                        // the built dictionary has reached the cap of the key type
+                        None => match dedup.get(value) {
+                            // as long as this value has already been indexed
+                            // the merge dictionary is still valid
+                            Some(previous_key) => {
+                                new_keys[key_index] = *previous_key;
+                            }
+                            None => return Err(ArrowError::DictionaryKeyOverflowError),

Review Comment:
   I ran coverage of this code
   
   ```shell
   cargo llvm-cov --html test -p arrow-select
   ```
   
   And found that this error path (where the fallback also errors with DictionaryKeyOverflowError) appears not to be covered:
   
   <img width="1119" height="970" alt="Screenshot 2025-12-03 at 4 34 14 PM" src="https://github.com/user-attachments/assets/3ccc76c4-5246-48aa-8f6f-e1654fbdb3ae" />
   
   Can you please add a test that covers this?



##########
arrow-data/src/transform/mod.rs:
##########
@@ -672,12 +674,24 @@ impl<'a> MutableArrayData<'a> {
                             next_offset += dict_len;
                         }
 
-                        build_extend_dictionary(array, offset, offset + dict_len)
+                        // -1 since offset is exclusive
+                        build_extend_dictionary(array, offset, 1.max(offset + dict_len) - 1)
                             .ok_or(ArrowError::DictionaryKeyOverflowError)
                     })
-                    .collect();
-
-                extend_values.expect("MutableArrayData::new is infallible")
+                    .collect::<Result<Vec<_>, ArrowError>>();
+                match result {
+                    Err(_) => {

Review Comment:
   I also think that it would help to add a comment explaining the rationale for this fallback -- namely something like "if the dictionary key overflows, it means there are too many keys in the concatenated dictionary -- in that case fall back to the slower path of merging (deduplicating) the dictionaries"



##########
arrow-select/src/interleave.rs:
##########
@@ -1182,4 +1188,296 @@ mod tests {
         assert_eq!(v.len(), 1);
         assert_eq!(v.data_type(), &DataType::Struct(fields));
     }
+    fn create_dict_arr<K: ArrowDictionaryKeyType>(
+        keys: Vec<K::Native>,
+        null_keys: Option<Vec<bool>>,
+        values: Vec<u16>,
+    ) -> ArrayRef {
+        let input_keys =
+            PrimitiveArray::<K>::from_iter_values_with_nulls(keys, null_keys.map(NullBuffer::from));
+        let input_values = UInt16Array::from_iter_values(values);
+        let input = DictionaryArray::new(input_keys, Arc::new(input_values));
+        Arc::new(input) as ArrayRef
+    }
+
+    fn create_dict_list_arr(
+        keys: Vec<u8>,
+        null_keys: Option<Vec<bool>>,
+        values: Vec<u16>,
+        lengths: Vec<usize>,
+        list_nulls: Option<Vec<bool>>,
+    ) -> ArrayRef {
+        let dict_arr = {
+            let input_1_keys =
+                UInt8Array::from_iter_values_with_nulls(keys, null_keys.map(NullBuffer::from));
+            let input_1_values = UInt16Array::from_iter_values(values);
+            DictionaryArray::new(input_1_keys, Arc::new(input_1_values))
+        };
+
+        let offset_buffer = OffsetBuffer::<i32>::from_lengths(lengths);
+        let list_arr = GenericListArray::new(
+            Arc::new(Field::new_dictionary(
+                "item",
+                DataType::UInt8,
+                DataType::UInt16,
+                true,
+            )),
+            offset_buffer,
+            Arc::new(dict_arr) as ArrayRef,
+            list_nulls.map(NullBuffer::from_iter),
+        );
+        Arc::new(list_arr) as ArrayRef
+    }
+
+    #[test]

Review Comment:
   Can you also please add a basic non-nested test too -- like concatenating multiple dictionary arrays together that have more keys than are representable with the key size?
   
   Also, it would be good to extend that with a test that has more distinct keys
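   To make the overflow concrete, here is a stdlib-only illustration of the case such a test should hit (hypothetical helper names, not the arrow-rs API): two u8-keyed dictionaries with 200 values each overflow the 256-slot key space when their value buffers are naively concatenated, but fit after deduplication when the values overlap.

   ```rust
   use std::collections::HashSet;

   // Hypothetical helpers illustrating the overflow the requested test should cover.
   fn fits_u8_keys(distinct_values: usize) -> bool {
       // u8 keys can index at most 256 dictionary values.
       distinct_values <= u8::MAX as usize + 1
   }

   fn distinct_after_merge(a: &[u16], b: &[u16]) -> usize {
       // The merge path deduplicates values before assigning new keys.
       a.iter().chain(b.iter()).copied().collect::<HashSet<u16>>().len()
   }
   ```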



##########
arrow-data/src/transform/mod.rs:
##########
@@ -672,12 +674,24 @@ impl<'a> MutableArrayData<'a> {
                             next_offset += dict_len;
                         }
 
-                        build_extend_dictionary(array, offset, offset + dict_len)
+                        // -1 since offset is exclusive
+                        build_extend_dictionary(array, offset, 1.max(offset + dict_len) - 1)
                             .ok_or(ArrowError::DictionaryKeyOverflowError)
                     })
-                    .collect();
-
-                extend_values.expect("MutableArrayData::new is infallible")
+                    .collect::<Result<Vec<_>, ArrowError>>();
+                match result {
+                    Err(_) => {

Review Comment:
   Also, I was confused for a while about how this could detect an error, given that it happens when constructing the extend, not when actually running it
   
   I think I understand now (it hasn't changed in this PR) -- the maximum dictionary key is computed based on each dictionary's size, which makes sense
   



##########
arrow-data/src/transform/mod.rs:
##########
@@ -672,12 +674,24 @@ impl<'a> MutableArrayData<'a> {
                             next_offset += dict_len;
                         }
 
-                        build_extend_dictionary(array, offset, offset + dict_len)
+                        // -1 since offset is exclusive
+                        build_extend_dictionary(array, offset, 1.max(offset + dict_len) - 1)

Review Comment:
   Rather than converting an Option --> Result, what about changing `build_extend_dictionary` to return the Result directly? (This would also make it easier to ensure you only return `DictionaryKeyOverflowError` when the key actually overflowed)
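   A sketch of that API shape with stand-in types (not the real signature, which takes an `ArrayData`): the `Result` is produced exactly where the overflow is detected, so no other failure can masquerade as an overflow.

   ```rust
   #[derive(Debug, PartialEq)]
   enum ArrowError {
       DictionaryKeyOverflowError,
   }

   // Hypothetical Result-returning form of build_extend_dictionary: the error
   // is constructed only at the point where the overflow is actually detected.
   fn build_extend_dictionary(needed_max_key: usize, key_type_max: usize) -> Result<usize, ArrowError> {
       if needed_max_key <= key_type_max {
           Ok(needed_max_key)
       } else {
           Err(ArrowError::DictionaryKeyOverflowError)
       }
   }
   ```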



##########
arrow-data/src/transform/mod.rs:
##########
@@ -672,12 +674,24 @@ impl<'a> MutableArrayData<'a> {
                             next_offset += dict_len;
                         }
 
-                        build_extend_dictionary(array, offset, offset + dict_len)
+                        // -1 since offset is exclusive

Review Comment:
   I don't understand this comment or change
   
   I reverted the change
   ```diff
   (venv) andrewlamb@Andrews-MacBook-Pro-3:~/Software/arrow-rs$ git diff
   diff --git a/arrow-data/src/transform/mod.rs b/arrow-data/src/transform/mod.rs
   index 12b03bbdf0..76a116c4cd 100644
   --- a/arrow-data/src/transform/mod.rs
   +++ b/arrow-data/src/transform/mod.rs
   @@ -674,8 +674,7 @@ impl<'a> MutableArrayData<'a> {
                                next_offset += dict_len;
                            }
   
   -                        // -1 since offset is exclusive
   -                        build_extend_dictionary(array, offset, 1.max(offset + dict_len) - 1)
   +                        build_extend_dictionary(array, offset, offset + dict_len)
                                .ok_or(ArrowError::DictionaryKeyOverflowError)
                        })
                        .collect::<Result<Vec<_>, ArrowError>>();
   ```
   
   And the tests still pass 🤔 
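   For reference, the two bounds only differ by one: the new expression is `offset + dict_len - 1` saturated at zero, i.e. the largest key index actually used rather than the one-past-the-end count. Stand-in functions (not the real code) make that concrete:

   ```rust
   // Hypothetical stand-ins comparing the two expressions in question.
   fn old_bound(offset: usize, dict_len: usize) -> usize {
       offset + dict_len // one-past-the-end count
   }

   fn new_bound(offset: usize, dict_len: usize) -> usize {
       1usize.max(offset + dict_len) - 1 // largest valid key index, saturated at 0
   }
   ```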
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
