Github user yijieshen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7283#discussion_r34175586
  
    --- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
 ---
    @@ -199,23 +199,23 @@ case class Cast(child: Expression, dataType: 
DataType) extends UnaryExpression w
         // TimestampWritable.floatToTimestamp
         case FloatType =>
           buildCast[Float](_, f => try {
    -        decimalToTimestamp(Decimal(f))
    +        decimalToTimestamp(Decimal(f.toString))
    --- End diff --
    
    let me try to explain this:
    
    `15.002` as a float isn't stored exactly; it is actually represented by its 
nearest representable neighbour, `15.00199985504150400000`. When `(f * 1000 
* 1000 * 10).longValue` is used, `150019998` is preserved as the internal 
long value, and converting back with `(ts / 10000000.0).toFloat` **lands on 
the same nearest neighbour**, so the result equals the original value.
    
    However, when `(f * 1000 * 1000).longValue` is used, `15001999` is stored 
internally, and converting it back to a float yields a different neighbour, 
`15.00199890136718800000`, which fails the original test. Note: it is the 
loss of the trailing `8` that changes the neighbour.
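
The rounding argument above can be reproduced in plain Scala, outside Spark. This is only a sketch of the mechanism (the object name and the explicit `toDouble`/`toLong` steps are illustrative, not the Cast.scala code); the numbers match the example in the comment:

```scala
// Sketch (not Spark code): shows why the 10^7 scale factor round-trips
// 15.002f while the 10^6 scale factor does not.
object FloatTimestampRoundTrip {
  def main(args: Array[String]): Unit = {
    val f = 15.002f
    // The float nearest to 15.002, seen once widened to Double:
    println(f.toDouble)                               // 15.001999855041504

    // Scale by 10^7: truncation keeps enough trailing digits...
    val ts10 = (f.toDouble * 1000 * 1000 * 10).toLong // 150019998
    // ...so dividing back lands inside the same float's rounding interval.
    println((ts10 / 10000000.0).toFloat == f)         // true

    // Scale by 10^6: truncation drops the trailing ".855..."...
    val ts = (f.toDouble * 1000 * 1000).toLong        // 15001999
    // ...and the round trip snaps to the adjacent float, 15.001999.
    println((ts / 1000000.0).toFloat == f)            // false
  }
}
```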


