[
https://issues.apache.org/jira/browse/SPARK-15388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15289710#comment-15289710
]
Apache Spark commented on SPARK-15388:
--------------------------------------
User 'wangyang1992' has created a pull request for this issue:
https://github.com/apache/spark/pull/13177
> spark sql "CREATE FUNCTION" throws exception with hive 1.2.1
> ------------------------------------------------------------
>
> Key: SPARK-15388
> URL: https://issues.apache.org/jira/browse/SPARK-15388
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.0
> Reporter: Yang Wang
>
> spark.sql("CREATE FUNCTION MY_FUNCTION_1 AS 'com.haizhi.bdp.udf.UDFGetGeoCode'") throws org.apache.spark.sql.AnalysisException.
> I was using Hive 1.2.1.
> Full stack trace is as follows:
> Exception in thread "main" org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:NoSuchObjectException(message:Function bdp.GET_GEO_CODE does not exist));
> at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:71)
> at org.apache.spark.sql.hive.HiveExternalCatalog.functionExists(HiveExternalCatalog.scala:323)
> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.functionExists(SessionCatalog.scala:712)
> at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createFunction(SessionCatalog.scala:663)
> at org.apache.spark.sql.execution.command.CreateFunction.run(functions.scala:68)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:57)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:55)
> at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:69)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:85)
> at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:85)
> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:187)
> at org.apache.spark.sql.Dataset.<init>(Dataset.scala:168)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:541)
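The failing call in the report can be wrapped in a minimal standalone driver. This is a sketch, not a verified reproduction: it assumes a Spark 2.0.0 build against a Hive 1.2.1 metastore, and the UDF class name is taken verbatim from the report (it must be on the driver classpath).

```scala
import org.apache.spark.sql.SparkSession

// Reproduction sketch for SPARK-15388 (assumes a Hive 1.2.1 metastore
// and that 'com.haizhi.bdp.udf.UDFGetGeoCode' is on the classpath).
object CreateFunctionRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SPARK-15388-repro")
      .enableHiveSupport() // route catalog calls through HiveExternalCatalog
      .getOrCreate()

    // Expected: the function is registered in the metastore.
    // Observed (per the stack trace above): AnalysisException, because the
    // pre-registration functionExists check surfaces the metastore's
    // NoSuchObjectException as an error instead of "function not found".
    spark.sql("CREATE FUNCTION MY_FUNCTION_1 AS 'com.haizhi.bdp.udf.UDFGetGeoCode'")

    spark.stop()
  }
}
```

Note that `SessionCatalog.createFunction` calls `functionExists` before registering, which is why a *missing* function paradoxically aborts its own creation.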
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)