coderfender commented on code in PR #3559:
URL: https://github.com/apache/datafusion-comet/pull/3559#discussion_r2893122333
##########
native/spark-expr/src/conversion_funcs/numeric.rs:
##########
@@ -75,6 +75,56 @@ pub(crate) fn is_df_cast_from_decimal_spark_compatible(to_type: &DataType) -> bool {
)
}
+macro_rules! cast_float_to_timestamp_impl {
+    ($array:expr, $builder:expr, $primitive_type:ty, $eval_mode:expr) => {{
+        let arr = $array.as_primitive::<$primitive_type>();
+        for i in 0..arr.len() {
+            if arr.is_null(i) {
+                $builder.append_null();
+            } else {
+                let val = arr.value(i) as f64;
+                // Path 1: NaN/Infinity check - error says TIMESTAMP
+                if val.is_nan() || val.is_infinite() {
+                    if $eval_mode == EvalMode::Ansi {
+                        return Err(SparkError::CastInvalidValue {
+                            value: val.to_string(),
+                            from_type: "DOUBLE".to_string(),
+                            to_type: "TIMESTAMP".to_string(),
+                        });
+                    }
+                    $builder.append_null();
+                } else {
+                    // Path 2: Multiply then check overflow - error says BIGINT
+                    let micros = val * MICROS_PER_SECOND as f64;
+                    if micros.floor() <= i64::MAX as f64 && micros.ceil() >= i64::MIN as f64 {
Review Comment:
@parthchandra That is a valid point. However, as far as I understand, Spark follows the same inconsistencies. This is the relevant Spark code I was replicating on the Comet side:

Non-ANSI:
https://github.com/apache/spark/blob/972897433082b1a7136b877b4fa37970961169d0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala#L794

ANSI:
https://github.com/apache/spark/blob/972897433082b1a7136b877b4fa37970961169d0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala#L74
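To make the two paths concrete, here is a minimal standalone Rust sketch of the behavior the macro above implements. This is not the PR code itself: the double_to_timestamp_micros helper, the simplified EvalMode enum, the Result<Option<i64>, String> return type, and the error strings are illustrative assumptions standing in for the builder/SparkError machinery; only the ordering (NaN/Infinity first, then the i64 range check after scaling to microseconds) and the TIMESTAMP vs BIGINT error split mirror the diff and the linked Spark code.

const MICROS_PER_SECOND: i64 = 1_000_000;

#[derive(PartialEq)]
enum EvalMode {
    Legacy,
    Ansi,
}

fn double_to_timestamp_micros(val: f64, eval_mode: EvalMode) -> Result<Option<i64>, String> {
    // Path 1: NaN/Infinity -> null in non-ANSI mode, or an error naming TIMESTAMP in ANSI mode
    if val.is_nan() || val.is_infinite() {
        return if eval_mode == EvalMode::Ansi {
            Err(format!("cannot cast {val} from DOUBLE to TIMESTAMP"))
        } else {
            Ok(None)
        };
    }
    // Path 2: scale to microseconds first, then range-check against i64;
    // overflow becomes null in non-ANSI mode, or an error naming BIGINT in ANSI mode
    let micros = val * MICROS_PER_SECOND as f64;
    if micros.floor() <= i64::MAX as f64 && micros.ceil() >= i64::MIN as f64 {
        Ok(Some(micros as i64))
    } else if eval_mode == EvalMode::Ansi {
        Err(format!("cannot cast {val} from DOUBLE to BIGINT"))
    } else {
        Ok(None)
    }
}

fn main() {
    assert_eq!(double_to_timestamp_micros(1.5, EvalMode::Legacy), Ok(Some(1_500_000)));
    assert_eq!(double_to_timestamp_micros(f64::NAN, EvalMode::Legacy), Ok(None));
    assert!(double_to_timestamp_micros(f64::INFINITY, EvalMode::Ansi).is_err());
    assert!(double_to_timestamp_micros(1e30, EvalMode::Ansi).is_err());
}

In non-ANSI mode the sketch falls back to null on both paths (the overflow branch is truncated in the diff above, so that fallback is an assumption); in ANSI mode the first path raises an error naming TIMESTAMP and the second an error naming BIGINT, which is the inconsistency being discussed.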