[ https://issues.apache.org/jira/browse/AMBARI-23043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandor Molnar reopened AMBARI-23043:
------------------------------------

 

[~vbrodetskyi]

Your change seems to break a Python unit test on trunk; see this Jenkins job: 
[https://builds.apache.org/job/Ambari-trunk-Commit/8768/consoleFull]

Could you please fix it ASAP?

Thanks!

cc: [~rlevas]

> 'Table or view not found' error with livy/livy2 interpreter on a cluster upgraded to Fenton-M30
> ---------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-23043
>                 URL: https://issues.apache.org/jira/browse/AMBARI-23043
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>            Reporter: Vitaly Brodetskyi
>            Assignee: Vitaly Brodetskyi
>            Priority: Critical
>              Labels: pull-request-available
>             Fix For: 2.6.2
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The test was performed as follows:
> CentOS6 + Ambari-2.5.1 + HDP-2.6.1 -> AU to Ambari-2.6.2 -> Full EU to HDP-2.6.5.0-74 (Fenton-M30) -> Run stack tests
> With the livy2 interpreter, any time we register a temporary view or table, the corresponding query on that table fails with a 'Table or view not found' error.
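> A minimal repro sketch of the failing scenario (run in a Zeppelin %livy2.spark paragraph; the data and view name here are illustrative, not the exact stack-test code):
> {code:scala}
> // Illustrative Spark 2.x snippet; assumes an active livy2 SparkSession named `spark`.
> import spark.implicits._
>
> // Build a small DataFrame and register it as a temporary view in this session.
> val words = Seq(("hello", 3), ("world", 2)).toDF("word", "cnt")
> words.createOrReplaceTempView("word_counts")
>
> // On the affected cluster, querying the view registered above fails with
> // "Table or view not found: word_counts".
> spark.sql("SELECT word, cnt FROM word_counts WHERE cnt > 1").show()
> {code}
> The resulting exception is: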
> {code:java}
> org.apache.spark.sql.AnalysisException: Table or view not found: word_counts; line 2 pos 24
>  at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:649)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:601)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:631)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:624)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:62)
>  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:61)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:59)
>  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
>  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:59)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:624)
>  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:570)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
>  at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
>  at scala.collection.immutable.List.foldLeft(List.scala:84)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
>  at scala.collection.immutable.List.foreach(List.scala:381)
>  at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
>  at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
>  at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
>  at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
>  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
>  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:637)
>  ... 50 elided
> {code}


