Github user vinodkc commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22171#discussion_r212251852
  
    --- Diff: sql/core/src/test/resources/sql-tests/results/literals.sql.out ---
    @@ -197,7 +197,7 @@ select .e3
     -- !query 20
     select 1E309, -1E309
     -- !query 20 schema
    -struct<1E+309:decimal(1,-309),-1E+309:decimal(1,-309)>
    +struct<1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000:decimal(1,-309),-1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000:decimal(1,-309)>
    --- End diff --
    
    @viirya This schema is auto-generated.
      The actual issue occurs only with a 0 value when the scale is higher than 6. If we need to
reduce the scope of impact, can we add this condition?
    ```
    // Use plain notation only for a zero value whose scale exceeds 6;
    // keep the default (scientific) notation for everything else.
    override def toString: String = if (decimalVal == 0 && _scale > 6) {
      toBigDecimal.bigDecimal.toPlainString()
    } else {
      toBigDecimal.toString()
    }
    ```
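
    To illustrate the difference the condition targets (a hedged sketch using plain `java.math.BigDecimal`, not Spark's `Decimal` class itself): for a zero with a large scale, `toString` falls back to scientific notation while `toPlainString` keeps the plain form.
    ```
    // A zero with scale 10: unscaled value 0, scale 10.
    val zero = BigDecimal(java.math.BigDecimal.valueOf(0L, 10))

    // toString uses scientific notation once the adjusted exponent
    // drops below -6, per the java.math.BigDecimal spec.
    println(zero.bigDecimal.toString)        // 0E-10
    // toPlainString always expands to plain decimal form.
    println(zero.bigDecimal.toPlainString)   // 0.0000000000
    ```
    So gating on `decimalVal == 0 && _scale > 6` would switch only those zero values to `toPlainString`, leaving large non-zero literals like `1E+309` in scientific notation.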


---
