Github user OopsOutOfMemory commented on a diff in the pull request:
https://github.com/apache/spark/pull/6551#discussion_r31886263
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala ---
@@ -39,6 +39,16 @@ class SQLQuerySuite extends QueryTest with BeforeAndAfterAll with SQLTestUtils {
  val sqlContext = TestSQLContext
  import sqlContext.implicits._
+  test("SPARK-8010: promote numeric to string") {
+    val df = Seq((1, 1)).toDF("key", "value")
+    df.registerTempTable("src")
+    val queryCaseWhen = sql("select case when true then 1.0 else '1' end from src ")
+    val queryCoalesce = sql("select coalesce(null, 1, '1') from src ")
--- End diff ---
@yhuai, actually we often write queries that use a UDF in the THEN and ELSE values, like below:
`select case when boolean then split(city_code, ',')[0] else -99 end from tablename`
Hive will implicitly convert the value of the CASE WHEN expression to string type, since `split(city_code, ',')[0]` returns a string while the ELSE value is an integer.
Spark SQL currently throws an exception because the types of the THEN value and the ELSE value are not convertible to each other.
The reason we use `StringType` is that, when doing implicit conversion on AtomicType, almost every type that meets `StringType` is converted to `StringType`, except `BinaryType` and `BooleanType`.
You can refer to the chart at the bottom of this page:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types
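To make the rule concrete, here is a minimal, hypothetical sketch in plain Scala of the coercion behavior described above; it is not Spark's actual `HiveTypeCoercion` code, and the type objects and the `tightestCommonType` helper are simplified stand-ins for illustration only:

```scala
// Simplified stand-ins for Spark SQL's atomic data types.
sealed trait DataType
case object IntegerType extends DataType
case object DoubleType  extends DataType
case object StringType  extends DataType
case object BinaryType  extends DataType
case object BooleanType extends DataType

object Coercion {
  // Numeric types can widen among themselves.
  private val numeric: Set[DataType] = Set(IntegerType, DoubleType)

  /** Hypothetical model of the rule: find a common type for the THEN
    * and ELSE branch types, falling back to StringType when an atomic
    * type meets StringType, except for BinaryType and BooleanType. */
  def tightestCommonType(a: DataType, b: DataType): Option[DataType] =
    (a, b) match {
      case (x, y) if x == y                     => Some(x)
      case (x, y) if numeric(x) && numeric(y)   => Some(DoubleType)
      // BinaryType and BooleanType do not promote to StringType.
      case (StringType, BinaryType)  | (BinaryType, StringType)  => None
      case (StringType, BooleanType) | (BooleanType, StringType) => None
      // Everything else meeting StringType becomes StringType.
      case (StringType, _) | (_, StringType)    => Some(StringType)
      case _                                    => None
    }
}

object Demo extends App {
  // `case when ... then split(city_code, ',')[0] else -99 end`:
  // string vs. integer promotes to string, as Hive does.
  println(Coercion.tightestCommonType(StringType, IntegerType)) // Some(StringType)
  println(Coercion.tightestCommonType(StringType, BooleanType)) // None
}
```

Under this model, the example query above type-checks because the string branch and the integer branch resolve to `StringType`, matching the Hive conversion chart linked above.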