[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17468405#comment-17468405 ] jerryMa commented on SPARK-28990:
----------------------------------

I used spark-sql to run a SQL file and hit the same error: {{catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *}}. The real cause was simply that a column referenced in the SQL did not exist. I also tested the statement "create table default.spark as select * from default.dual11;" on Spark 3.0 and 3.2, and it fails with the same error. Can somebody explain this?

{code}
org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *
 at org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:225)
 at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:50)
 at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:50)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at scala.collection.immutable.List.foreach(List.scala:381)
 at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
 at scala.collection.immutable.List.map(List.scala:285)
 at org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:50)
 at org.apache.spark.sql.catalyst.plans.QueryPlan.schema$lzycompute(QueryPlan.scala:330)
 at org.apache.spark.sql.catalyst.plans.QueryPlan.schema(QueryPlan.scala:330)
 at org.apache.spark.sql.hive.execution.CreateHiveTableAsSelectCommand.run(CreateHiveTableAsSelectCommand.scala:72)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
 at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
 at org.apache.spark.sql.Dataset.<init>(Dataset.scala:182)
 at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
 at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
 at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)
 at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
 at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:340)
 at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
 at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:248)
 at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
 at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
{code}

> SparkSQL invalid call to toAttribute on unresolved object, tree: *
> ------------------------------------------------------------------
>
>                 Key: SPARK-28990
>                 URL: https://issues.apache.org/jira/browse/SPARK-28990
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.3
>            Reporter: fengchaoge
>            Priority: Major
>
> A SparkSQL CREATE TABLE ... AS SELECT from a table that may not exist throws an exception like:
> {code}
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *
> {code}
> This is not friendly; a Spark user may have no idea what went wrong.
> Simple SQL can reproduce it, like this:
> {code}
> spark-sql (default)> create table default.spark as select * from default.dual;
> {code}
> {code}
> 2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing command: create table default.spark as select * from default.dual
> 2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in [create table default.spark as select * from default.dual]
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *
> at org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)
> at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(
> {code}
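Until the error message is improved, a defensive pre-check in spark-sql avoids the confusing internal stack trace. This is a minimal sketch, assuming the {{default.dual}} names from the report:

{code}
-- SHOW TABLES returns an empty result (rather than throwing) when the
-- table is missing, so the source can be confirmed before the CTAS.
SHOW TABLES IN default LIKE 'dual';

-- Run the CTAS only once the source table is known to exist.
CREATE TABLE default.spark AS SELECT * FROM default.dual;
{code}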
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038983#comment-17038983 ] Gary Scott commented on SPARK-28990:
------------------------------------

[~fengchaoge] [~stephenwoo] [~xiaozhang] I have the same issue. Can you please advise on when this will be resolved?
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17002724#comment-17002724 ] lucusguo commented on SPARK-28990:
----------------------------------

[~fengchaoge] [~stephenwoo] [~xiaozhang] In the statement
{code}
spark-sql (default)> create table default.spark as select * from default.dual;
{code}
the error reproduces whenever the database name or the table name does not exist. For example, the SQL below references a table dual11 that does not exist:
{code}
spark-sql (default)> create table default.spark as select * from default.dual11;
{code}
So it really is unfriendly.
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17002698#comment-17002698 ] Wenchao Wu commented on SPARK-28990:
------------------------------------

[~lucusguo] [~xiaozhang] Me too.
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17002696#comment-17002696 ] Xiao Zhang commented on SPARK-28990:
------------------------------------

[~fengchaoge] Me too.
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17002695#comment-17002695 ] lucusguo commented on SPARK-28990:
----------------------------------

However, I cannot reproduce it on Spark 2.4.3.
[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16937725#comment-16937725 ]

fengchaoge commented on SPARK-28990:

Spark 3.0 does fix this problem, but I'd like to know what has changed.

> SparkSQL invalid call to toAttribute on unresolved object, tree: *
> ------------------------------------------------------------------
>
> Key: SPARK-28990
> URL: https://issues.apache.org/jira/browse/SPARK-28990
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.4.3
> Reporter: fengchaoge
> Priority: Major
>
> SparkSQL "create table as select" from a table which may not exist throws an exception like:
> {code}
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *
> {code}
> This is not friendly; a Spark user may have no idea what went wrong.
> A simple SQL statement can reproduce it:
> {code}
> spark-sql (default)> create table default.spark as select * from default.dual;
> {code}
> {code}
> 2019-09-05 16:27:24,127 INFO (main) [Logging.scala:logInfo(54)] - Parsing command: create table default.spark as select * from default.dual
> 2019-09-05 16:27:24,772 ERROR (main) [Logging.scala:logError(91)] - Failed in [create table default.spark as select * from default.dual]
> org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to toAttribute on unresolved object, tree: *
> 	at org.apache.spark.sql.catalyst.analysis.Star.toAttribute(unresolved.scala:245)
> 	at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
> 	at org.apache.spark.sql.catalyst.plans.logical.Project$$anonfun$output$1.apply(basicLogicalOperators.scala:52)
> 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> 	at scala.collection.immutable.List.foreach(List.scala:392)
> 	at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> 	at scala.collection.immutable.List.map(List.scala:296)
> 	at org.apache.spark.sql.catalyst.plans.logical.Project.output(basicLogicalOperators.scala:52)
> 	at org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:160)
> 	at org.apache.spark.sql.hive.HiveAnalysis$$anonfun$apply$3.applyOrElse(HiveStrategies.scala:148)
> 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
> 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
> 	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
> 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
> 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
> 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
> 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
> 	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
> 	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
> 	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
> 	at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:148)
> 	at org.apache.spark.sql.hive.HiveAnalysis$.apply(HiveStrategies.scala:147)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
> 	at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
> 	at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
> 	at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:48)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
> 	at scala.collection.immutable.List.foreach(List.scala:392)
> 	at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
> 	at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
> 	at org.apache.spark.sql.catal
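The failure mode in the quoted trace — `Project.output` mapping `toAttribute` over a still-unresolved `Star` — can be modeled in a few lines outside Spark. This is an illustrative sketch, not Spark's code; the class and method names only mirror the Catalyst ones that appear in the stack trace.

```python
class UnresolvedException(Exception):
    """Stands in for catalyst.analysis.UnresolvedException."""


class Star:
    """Models catalyst.analysis.Star: a '*' projection before resolution."""

    def to_attribute(self):
        # Mirrors Star.toAttribute in unresolved.scala: an unresolved '*'
        # has no concrete attributes yet, so asking for one must fail.
        raise UnresolvedException(
            "Invalid call to toAttribute on unresolved object, tree: *")


class Project:
    """Models the Project logical operator."""

    def __init__(self, projections):
        self.projections = projections

    @property
    def output(self):
        # Like Project.output in basicLogicalOperators.scala: map
        # toAttribute over the projection list. If the source table was
        # never resolved, the Star is still unresolved and this blows up
        # with the internal error instead of "table not found".
        return [p.to_attribute() for p in self.projections]


if __name__ == "__main__":
    plan = Project([Star()])  # SELECT * whose source table was not found
    try:
        plan.output
    except UnresolvedException as e:
        print(type(e).__name__, e)
```

The point of the model: the raw error surfaces from plan-tree plumbing, which is why the message says nothing about the missing table.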
[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16936778#comment-16936778 ]

fengchaoge commented on SPARK-28990:

@[~726575...@qq.com] Hello daile, can you send a link? Thanks.
[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16933163#comment-16933163 ]

daile commented on SPARK-28990:

It seems to have been solved in 3.0.
[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923462#comment-16923462 ]

fengchaoge commented on SPARK-28990:

Thank you very much. I have some idea about it: the Analyzer's executeAndCheck method throws an UnresolvedException which is not captured.
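A sketch of the kind of guard this comment points at: wrap the analysis step, catch the internal UnresolvedException, and surface a user-facing error instead. All names here (`analyze`, `execute_and_check`, `AnalysisError`, the toy catalog) are hypothetical illustrations, not Spark's actual API or its actual fix.

```python
class UnresolvedException(Exception):
    """Internal analyzer error, as in Catalyst."""


class AnalysisError(Exception):
    """User-facing analysis error (hypothetical stand-in)."""


TABLES = {"default.dual_ok"}  # toy catalog for the sketch


def analyze(table_name):
    # If the table is missing, an internal error escapes when nothing
    # catches it -- the situation described in the comment above.
    if table_name not in TABLES:
        raise UnresolvedException(
            "Invalid call to toAttribute on unresolved object, tree: *")
    return "resolved plan for " + table_name


def execute_and_check(table_name):
    # The guard: translate the internal failure into a friendly message,
    # roughly the kind of error later Spark versions report.
    try:
        return analyze(table_name)
    except UnresolvedException:
        raise AnalysisError("Table or view not found: " + table_name)


if __name__ == "__main__":
    try:
        execute_and_check("default.dual")
    except AnalysisError as e:
        print(e)
```

With the guard in place, the user sees which table was missing instead of an internal plan-tree error.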
[jira] [Commented] (SPARK-28990) SparkSQL invalid call to toAttribute on unresolved object, tree: *
[ https://issues.apache.org/jira/browse/SPARK-28990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16923231#comment-16923231 ]

Shivu Sondur commented on SPARK-28990:

I will check this issue.

> SparkSQL invalid call to toAttribute on unresolved object, tree: *
> ------------------------------------------------------------------
>
> Key: SPARK-28990
> URL: https://issues.apache.org/jira/browse/SPARK-28990
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.4.3
> Environment: Any
> Reporter: fengchaoge
> Priority: Major
> Fix For: 2.4.4