Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18853#discussion_r155364153
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1490,6 +1490,16 @@ that these options will be deprecated in future release as more optimizations ar
           Configures the number of partitions to use when shuffling data for joins or aggregations.
         </td>
       </tr>
    +  <tr>
    +    <td><code>spark.sql.typeCoercion.mode</code></td>
    +    <td><code>default</code></td>
    +    <td>
    +      The <code>default</code> type coercion mode was used in spark prior to 2.3.0, and so it
    +      continues to be the default to avoid breaking behavior. However, it has logical
    +      inconsistencies. The <code>hive</code> mode is preferred for most new applications, though
    +      it may require additional manual casting.
    --- End diff ---
    
    > Since Spark 2.3, the <code>hive</code> mode is introduced for Hive compatibility. Spark SQL has its native type coercion mode, which is enabled by default.


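    For anyone who wants to experiment with the option discussed above, here is a minimal Scala sketch. It assumes the proposed <code>spark.sql.typeCoercion.mode</code> key from this diff is available in the build; the query is only an illustration of where coercion rules can come into play, not a documented behavioral difference.

        import org.apache.spark.sql.SparkSession

        // Minimal sketch: start a session with the proposed "hive" coercion mode.
        // Assumes the spark.sql.typeCoercion.mode key from this PR exists in the build.
        val spark = SparkSession.builder()
          .appName("TypeCoercionModeSketch")
          .master("local[*]")
          .config("spark.sql.typeCoercion.mode", "hive")
          .getOrCreate()

        // Switching the value for the current session at runtime; whether the key is
        // session-mutable would depend on how the config is ultimately defined.
        spark.conf.set("spark.sql.typeCoercion.mode", "default")

        // String-to-numeric comparison is one place where type coercion rules apply.
        spark.sql("SELECT '2' > 1 AS coerced").show()

        spark.stop()
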
---
