Github user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10845#discussion_r50228717
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/HiveTypeCoercion.scala ---
    @@ -696,6 +697,43 @@ object HiveTypeCoercion {
       }
     
       /**
    +   * Strength reduction for comparisons between an integral column and a decimal literal:
    +   *
    +   * 1. int_col > decimal_literal => int_col > floor(decimal_literal)
    +   * 2. int_col >= decimal_literal => int_col >= ceil(decimal_literal)
    +   * 3. int_col < decimal_literal => int_col < ceil(decimal_literal)
    +   * 4. int_col <= decimal_literal => int_col <= floor(decimal_literal)
    +   * 5. decimal_literal > int_col => ceil(decimal_literal) > int_col
    +   * 6. decimal_literal >= int_col => floor(decimal_literal) >= int_col
    +   * 7. decimal_literal < int_col => floor(decimal_literal) < int_col
    +   * 8. decimal_literal <= int_col => ceil(decimal_literal) <= int_col
    +   *
    +   */
    +  object SimplifyIntegerDecimalComparing extends Rule[LogicalPlan] {
    --- End diff --
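
    To make the quoted rewrite concrete, here is a minimal sketch of how such a
    strength-reduction rule could look, assuming a Spark 1.6-era Catalyst API; the
    object name, the helper names, and the exact Cast/Literal shapes are my own
    illustration, not the code from this PR:

        import org.apache.spark.sql.catalyst.expressions._
        import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
        import org.apache.spark.sql.catalyst.rules.Rule
        import org.apache.spark.sql.types._

        // Illustrative only: two of the eight cases listed in the doc comment above.
        object SimplifyIntegerDecimalComparingSketch extends Rule[LogicalPlan] {

          private def isIntegral(dt: DataType): Boolean = dt match {
            case ByteType | ShortType | IntegerType | LongType => true
            case _ => false
          }

          // Round the decimal literal down/up and cast it back to the attribute's
          // type so the comparison stays type-consistent; constant folding will
          // remove the Cast around the literal later.
          private def floorOf(d: Decimal, dt: DataType): Expression =
            Cast(Literal(d.toBigDecimal.setScale(0, BigDecimal.RoundingMode.FLOOR).toLong), dt)
          private def ceilOf(d: Decimal, dt: DataType): Expression =
            Cast(Literal(d.toBigDecimal.setScale(0, BigDecimal.RoundingMode.CEILING).toLong), dt)

          override def apply(plan: LogicalPlan): LogicalPlan = plan transformAllExpressions {
            // Case 1: int_col > decimal_literal  =>  int_col > floor(decimal_literal)
            case GreaterThan(Cast(a: AttributeReference, _: DecimalType),
                             Literal(d: Decimal, _: DecimalType)) if isIntegral(a.dataType) =>
              GreaterThan(a, floorOf(d, a.dataType))
            // Case 3: int_col < decimal_literal  =>  int_col < ceil(decimal_literal)
            case LessThan(Cast(a: AttributeReference, _: DecimalType),
                          Literal(d: Decimal, _: DecimalType)) if isIntegral(a.dataType) =>
              LessThan(a, ceilOf(d, a.dataType))
            // ... the remaining six cases follow the same pattern.
          }
        }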
    
    I am wondering, is it okay to simply remove these type casts here? I mean, we can't be sure where these casts were added. If this optimization works by removing the type casts around the integer attribute and the decimal literal, will it introduce bugs when a user has added the casts manually?
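
    For example (a hypothetical snippet; the column name, the decimal precision, and
    the overflow-to-null cast behavior are my assumptions, not something stated in
    this PR), a user may have written the cast explicitly with a narrow decimal type,
    in which case dropping that Cast would change the result:

        import org.apache.spark.sql.DataFrame

        // df is assumed to have an integer column named int_col.
        def userWrittenCast(df: DataFrame): DataFrame =
          // For a value such as int_col = 100, the explicit cast to DECIMAL(2, 1)
          // overflows and the predicate evaluates to null, so the row is dropped;
          // the rewritten predicate `int_col > 1` would keep it.
          df.filter("CAST(int_col AS DECIMAL(2, 1)) > 1.5")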

