peterxcli commented on code in PR #3448:
URL: https://github.com/apache/datafusion-comet/pull/3448#discussion_r2781673932


##########
native/spark-expr/src/datetime_funcs/extract_date_part.rs:
##########
@@ -86,9 +87,32 @@ macro_rules! extract_date_part {
                         let result = date_part(&array, DatePart::$date_part_variant)?;
                         Ok(ColumnarValue::Array(result))
                     }
-                    _ => Err(DataFusionError::Execution(
-                        concat!($fn_name, "(scalar) should be fold in Spark JVM side.").to_string(),
-                    )),
+                    [ColumnarValue::Scalar(scalar)] => {
+                        // When Spark's ConstantFolding is disabled, literal-only expressions like
+                        // hour can reach the native engine as scalar inputs.
+                        // Instead of failing and requiring JVM folding, we evaluate the scalar
+                        // natively by broadcasting it to a single-element array and then
+                        // converting the result back to a scalar.
+                        let array = scalar.clone().to_array_of_size(1)?;
+                        let array = array_with_timezone(
+                            array,
+                            self.timezone.clone(),
+                            Some(&DataType::Timestamp(
+                                Microsecond,
+                                Some(self.timezone.clone().into()),
+                            )),
+                        )?;
+                        let result = date_part(&array, DatePart::$date_part_variant)?;
+                        let result_arr = result
+                            .as_any()
+                            .downcast_ref::<Int32Array>()
+                            .expect("date_part should return Int32Array");
+
+                        let scalar_result =
+                            ScalarValue::try_from_array(result_arr, 0).map_err(DataFusionError::from)?;
+
+                        Ok(ColumnarValue::Scalar(scalar_result))

Review Comment:
   Why not return the array as-is, so we don't need to add a scalar type handler in `SparkUnixTimestamp`?
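   
   For illustration, a minimal sketch of that alternative (hypothetical; it only reshuffles the arm quoted above, keeping the broadcast-to-array step and dropping the conversion back to a scalar):
   
   ```rust
   [ColumnarValue::Scalar(scalar)] => {
       // Broadcast the scalar to a one-row array, as in the current patch.
       let array = scalar.clone().to_array_of_size(1)?;
       let array = array_with_timezone(
           array,
           self.timezone.clone(),
           Some(&DataType::Timestamp(
               Microsecond,
               Some(self.timezone.clone().into()),
           )),
       )?;
       let result = date_part(&array, DatePart::$date_part_variant)?;
       // Return the single-element array as-is, so callers such as
       // SparkUnixTimestamp never see a scalar result and need no extra handling.
       Ok(ColumnarValue::Array(result))
   }
   ```
   
   This trades the scalar round-trip for a one-row array result; whether any downstream code relies on getting a scalar back here is the open question.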



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

