Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13093#discussion_r64156604
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---
@@ -441,7 +441,15 @@ class Analyzer(
object ResolveRelations extends Rule[LogicalPlan] {
private def lookupTableFromCatalog(u: UnresolvedRelation): LogicalPlan = {
try {
- catalog.lookupRelation(u.tableIdentifier, u.alias)
+ val table = catalog.lookupRelation(u.tableIdentifier, u.alias)
+ table match {
+ case SubqueryAlias(_, s: SimpleCatalogRelation)
--- End diff ---
New Spark users might instantiate `SparkSession` without Hive support, since Hive support is not turned on by default. If they then try to create a table and insert some rows, they will hit the following exception:
```
org.apache.spark.sql.AnalysisException: unresolved operator
'SimpleCatalogRelation default,
CatalogTable(`default`.`tbl`,CatalogTableType(MANAGED),CatalogStorageFormat(None,Some(org.apache.hadoop.mapred.TextInputFormat),Some(org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat),None,false,Map()),List(CatalogColumn(i,int,true,None),
CatalogColumn(j,string,true,None)),List(),List(),List(),-1,,1463928681802,-1,Map(),None,None,None,List()),
None;
```
These external users have no way of knowing what this exception means. :( I think it is very critical to provide a meaningful message to them.
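To make the scenario concrete, here is a minimal sketch of the flow described above, assuming Spark 2.0-era APIs; the table name `tbl` and the local master setting are illustrative, and the second statement is where the `AnalysisException` would surface:

```scala
import org.apache.spark.sql.SparkSession

// A SparkSession built without .enableHiveSupport() uses the
// in-memory catalog, whose tables resolve to SimpleCatalogRelation.
val spark = SparkSession.builder()
  .master("local")
  .appName("repro")
  .getOrCreate()

spark.sql("CREATE TABLE tbl(i INT, j STRING)")
// Inserting into the table then fails during analysis with the
// "unresolved operator 'SimpleCatalogRelation ..." message shown above.
spark.sql("INSERT INTO tbl VALUES (1, 'a')")
```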