viirya commented on code in PR #424:
URL: https://github.com/apache/datafusion-comet/pull/424#discussion_r1600515535


##########
core/src/execution/datafusion/spark_hash.rs:
##########
@@ -193,27 +241,67 @@ macro_rules! hash_array_decimal {
 fn create_hashes_dictionary<K: ArrowDictionaryKeyType>(
     array: &ArrayRef,
     hashes_buffer: &mut [u32],
+    multi_col: bool,
 ) -> Result<()> {
     let dict_array = array.as_any().downcast_ref::<DictionaryArray<K>>().unwrap();
+    if multi_col {
+        // unpack the dictionary array as each row may have a different hash input
+        let unpacked = take(dict_array.values().as_ref(), dict_array.keys(), None)?;
+        create_hashes(&[unpacked], hashes_buffer)?;
+    } else {
+        // Hash each dictionary value once, and then use that computed
+        // hash for each key value to avoid a potentially expensive
+        // redundant hashing for large dictionary elements (e.g. strings)
+        let dict_values = Arc::clone(dict_array.values());
+        // same initial seed as Spark
+        let mut dict_hashes = vec![42; dict_values.len()];
+        create_hashes(&[dict_values], &mut dict_hashes)?;

Review Comment:
   Hmm, then I think this approach is actually incorrect even for the first column. We cannot pre-compute hashes for the dictionary values: the row order given by the keys is not the same as the dictionary value order.
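
   The `take`-based unpacking in the diff can be modeled in plain Rust without arrow types. This is a minimal sketch, not the Comet code: `unpack` and `hash_rows` are hypothetical stand-ins, and the byte-fold hash is only a placeholder for the Murmur3 hash Spark uses (seeded with 42, as in the diff). It shows that after unpacking, each row's hash input is `values[keys[i]]`, so rows sharing a key hash identically:

   ```rust
   // Model of dictionary unpacking: row i carries values[keys[i]],
   // which is what arrow's `take(values, keys, None)` produces.
   fn unpack<'a>(values: &[&'a str], keys: &[usize]) -> Vec<&'a str> {
       keys.iter().map(|&k| values[k]).collect()
   }

   // Placeholder row hash seeded with 42 (Spark's initial seed);
   // the real code hashes with Murmur3 as Spark does.
   fn hash_rows(rows: &[&str]) -> Vec<u32> {
       rows.iter()
           .map(|s| {
               s.bytes()
                   .fold(42u32, |h, b| h.wrapping_mul(31).wrapping_add(b as u32))
           })
           .collect()
   }

   fn main() {
       // Hypothetical dictionary-encoded column.
       let values = ["foo", "bar", "baz"];
       let keys = [2usize, 0, 0, 1];

       let unpacked = unpack(&values, &keys);
       assert_eq!(unpacked, ["baz", "foo", "foo", "bar"]);

       let hashes = hash_rows(&unpacked);
       // Rows 1 and 2 share key 0, so they hash to the same value.
       assert_eq!(hashes[1], hashes[2]);
       println!("row hashes: {:?}", hashes);
   }
   ```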



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

