Kevin Zhang created SPARK-23498:

             Summary: Accuracy problem in comparison with string and integer
                 Key: SPARK-23498
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.2.1, 2.2.0, 2.3.0
            Reporter: Kevin Zhang

When comparing a string column with an integer value, Spark SQL will 
automatically cast the string operand to int. The following SQL returns 
true in Hive but false in Spark:

select '1000.1'>1000

From the physical plan we can see that the string operand was cast to int, 
which caused the loss of accuracy:
*Project [false AS (CAST(1000.1 AS INT) > 1000)#4]
+- Scan OneRowRelation[]
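
For reference, the behavior can be reproduced in spark-shell (a minimal 
sketch, assuming the default spark session provided by spark-shell):

// reproduce the comparison; Spark currently prints false
spark.sql("select '1000.1' > 1000").show()
// the alias in the physical plan still shows the implicit int cast
spark.sql("select '1000.1' > 1000").explain()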
Similar to SPARK-22469, I think it is safe to use double as the common type 
and cast both operands to it.
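
As a sketch of the expected result (written with explicit casts today, since 
the implicit promotion rule is what this issue proposes to change):

// with double as the common type the fractional part is preserved,
// so the comparison agrees with Hive and prints true
spark.sql("select cast('1000.1' as double) > cast(1000 as double)").show()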
