yaooqinn edited a comment on pull request #28963:
URL: https://github.com/apache/spark/pull/28963#issuecomment-652893272
Here is an example:
## Without this PR
```sql
kentyao@hulk ~/Downloads/spark/spark-3.1.0-SNAPSHOT-bin-20200620
bin/beeline -u 'jdbc:hive2://localhost:10000/default;a=bc;'
Connecting to jdbc:hive2://localhost:10000/default;a=bc;
log4j:WARN No appenders could be found for logger (org.apache.hive.jdbc.Utils).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Connected to: Spark SQL (version 3.1.0-SNAPSHOT)
Driver: Hive JDBC (version 2.3.7)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.3.7 by Apache Hive
0: jdbc:hive2://localhost:10000/default> select date_sub(date'2011-11-11', '1.2');
Error: Error running query: java.lang.NumberFormatException: invalid input syntax for type numeric: 1.2 (state=,code=0)
```
## With this PR
```sql
kentyao@hulk ~/Downloads/spark/spark-3.1.0-SNAPSHOT-bin-20200630
bin/beeline -u 'jdbc:hive2://localhost:10000/default;a=bc;'
log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Connecting to jdbc:hive2://localhost:10000/default;a=bc;
Connected to: Spark SQL (version 3.1.0-SNAPSHOT)
Driver: Hive JDBC (version 2.3.7)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 2.3.7 by Apache Hive
0: jdbc:hive2://localhost:10000/default> select date_sub(date'2011-11-11', '1.2');
Error: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.sql.AnalysisException: The second argument of 'date_sub' function needs to be an integer.;
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:322)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.$anonfun$run$1(SparkExecuteStatementOperation.scala:222)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78)
	at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:46)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:222)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1$$anon$2.run(SparkExecuteStatementOperation.scala:217)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$1.run(SparkExecuteStatementOperation.scala:233)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
....
Caused by: java.lang.NumberFormatException: invalid input syntax for type numeric: 1.2
	at org.apache.spark.unsafe.types.UTF8String.toIntExact(UTF8String.java:1335)
	at org.apache.spark.sql.catalyst.expressions.CastBase.$anonfun$castToInt$2(Cast.scala:515)
	at org.apache.spark.sql.catalyst.expressions.CastBase.$anonfun$castToInt$2$adapted(Cast.scala:515)
	at org.apache.spark.sql.catalyst.expressions.CastBase.buildCast(Cast.scala:295)
	at org.apache.spark.sql.catalyst.expressions.CastBase.$anonfun$castToInt$1(Cast.scala:515)
	at org.apache.spark.sql.catalyst.expressions.CastBase.nullSafeEval(Cast.scala:824)
	at org.apache.spark.sql.catalyst.expressions.UnaryExpression.eval(Expression.scala:475)
	at org.apache.spark.sql.catalyst.analysis.TypeCoercion$StringLiteralCoercion$$anonfun$coerceTypes$14.applyOrElse(TypeCoercion.scala:1094)
	... 96 more (state=,code=0)
```
As the logs above show, without this PR the `AnalysisException` is missing: the client only sees the bare `NumberFormatException` from the failed cast, with no hint that the second argument of `date_sub` must be an integer.
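The general idea can be sketched outside Spark. The following is a minimal Python sketch (not Spark's actual code; `date_sub` and its error message here are illustrative) of why a string like `'1.2'` fails an exact integer cast, and how validating the argument up front yields a descriptive analysis-style error instead of a raw cast failure:

```python
from datetime import date, timedelta

def date_sub(start: date, days) -> date:
    """Subtract `days` from `start`; string arguments must be exact integers."""
    if isinstance(days, str):
        try:
            # Mirrors an exact integer cast: int("1.2") raises, just as
            # UTF8String.toIntExact rejects "1.2" in the trace above.
            days = int(days)
        except ValueError:
            # Surface a descriptive error, analogous to the
            # AnalysisException the PR adds, instead of the bare
            # NumberFormatException-style cast failure.
            raise ValueError(
                "The second argument of 'date_sub' function needs to be an integer."
            )
    return start - timedelta(days=days)

print(date_sub(date(2011, 11, 11), "1"))  # 2011-11-10
```

With `'1.2'`, the call now fails with the descriptive message rather than a generic parse error, which is the user-visible difference between the two sessions above.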