Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/3208#discussion_r20775962
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -339,18 +339,15 @@ class SqlParser extends AbstractSparkSQLParser {
| floatLit ^^ { f => Literal(f.toDouble) }
)
- private val longMax = BigDecimal(s"${Long.MaxValue}")
- private val longMin = BigDecimal(s"${Long.MinValue}")
- private val intMax = BigDecimal(s"${Int.MaxValue}")
- private val intMin = BigDecimal(s"${Int.MinValue}")
-
private def toNarrowestIntegerType(value: String) = {
val bigIntValue = BigDecimal(value)
bigIntValue match {
- case v if v < longMin || v > longMax => v
- case v if v < intMin || v > intMax => v.toLong
- case v => v.toInt
+ case v if bigIntValue.isValidByte => v.toByteExact
+ case v if bigIntValue.isValidShort => v.toShortExact
+ case v if bigIntValue.isValidInt => v.toIntExact
+ case v if bigIntValue.isValidLong => v.toLongExact
+ case v => v
--- End diff --
Recently I have been debugging a few bugs in the Presto database, so I took a look at
how Presto treats integer literals. I found that it just uses Long to
represent all integer types. Although this does not mean all SQL systems do
it this way, I think it can serve as a reference here.
So I will remove the byte and short types in this PR so that it can be
merged. Thanks.
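For illustration, the proposed change (dropping the Byte/Short cases and keeping only Int, Long, and BigDecimal) might look roughly like the sketch below. This is a hypothetical standalone version, not the PR's final code; the method name and return shape follow the diff above.

```scala
// Sketch of the simplified literal-narrowing logic: parse a numeric
// string into the narrowest of Int, Long, or BigDecimal. Byte and
// Short are intentionally omitted, as proposed in the comment above.
object NarrowestType {
  def toNarrowestIntegerType(value: String): Any = {
    val bigIntValue = BigDecimal(value)
    bigIntValue match {
      case v if v.isValidInt  => v.toIntExact   // fits in Int
      case v if v.isValidLong => v.toLongExact  // fits in Long
      case v => v                               // fall back to BigDecimal
    }
  }
}
```

For example, `toNarrowestIntegerType("5")` yields an `Int`, `toNarrowestIntegerType("9999999999")` a `Long`, and anything beyond `Long.MaxValue` stays a `BigDecimal`.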
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]