[ https://issues.apache.org/jira/browse/SPARK-44069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yang Jie updated SPARK-44069:
-----------------------------
    Description: 
https://github.com/LuciferYang/spark/actions/runs/5274544416/jobs/9541917589  
(was: {code:java}
./build/mvn -DskipTests -Pyarn -Pmesos -Pkubernetes -Pvolcano -Phive -Phive-thriftserver -Phadoop-cloud -Pspark-ganglia-lgpl clean install
build/mvn test -pl repl
{code}
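As a quick sanity check before rerunning the suite, a snippet like the one below can be pasted into a spark-shell launched from the same Maven-installed artifacts. It is a hypothetical diagnostic (not part of ReplSuite): it simply tries to resolve the two relocated Guava classes named in the stack traces that follow.
{code:scala}
// Hypothetical diagnostic, not part of ReplSuite: check whether the relocated
// Guava classes referenced by the failures below are visible to this JVM.
Seq(
  "org.sparkproject.guava.cache.CacheBuilder",
  "org.sparkproject.guava.util.concurrent.AtomicLongMap"
).foreach { name =>
  try {
    Class.forName(name)
    println(s"OK:      $name")
  } catch {
    case _: ClassNotFoundException => println(s"MISSING: $name")
  }
}
{code}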
 
{code:java}
ReplSuite:
Spark context available as 'sc' (master = local, app id = local-1686829049116).
Spark session available as 'spark'.
- SPARK-15236: use Hive catalog *** FAILED ***
  isContain was true Interpreter output contained 'Exception':
  Welcome to
        ____              __
       / __/__  ___ _____/ /__
      _\ \/ _ \/ _ `/ __/  '_/
     /___/ .__/\_,_/_/ /_/\_\   version 3.5.0-SNAPSHOT
        /_/

  Using Scala version 2.12.17 (OpenJDK 64-Bit Server VM, Java 1.8.0_372)
  Type in expressions to have them evaluated.
  Type :help for more information.

  scala> 
  scala> java.lang.NoClassDefFoundError: org/sparkproject/guava/cache/CacheBuilder
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:197)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalog$lzycompute(BaseSessionStateBuilder.scala:153)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalog(BaseSessionStateBuilder.scala:152)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.v2SessionCatalog$lzycompute(BaseSessionStateBuilder.scala:166)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.v2SessionCatalog(BaseSessionStateBuilder.scala:166)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalogManager$lzycompute(BaseSessionStateBuilder.scala:168)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.catalogManager(BaseSessionStateBuilder.scala:168)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anon$1.<init>(BaseSessionStateBuilder.scala:185)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.analyzer(BaseSessionStateBuilder.scala:185)
    at org.apache.spark.sql.internal.BaseSessionStateBuilder.$anonfun$build$2(BaseSessionStateBuilder.scala:373)
    at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:92)
    at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:92)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:76)
    at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:202)
    at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:529)
    at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:202)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
    at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:201)
    at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:76)
    at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:74)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:66)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:640)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:630)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:671)
    ... 94 elided
  Caused by: java.lang.ClassNotFoundException: org.sparkproject.guava.cache.CacheBuilder
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 123 more

  scala>      | 
  scala> :quit (ReplSuite.scala:83)
Spark context available as 'sc' (master = local, app id = local-1686829054261).
Spark session available as 'spark'.
- SPARK-15236: use in-memory catalog
Spark context available as 'sc' (master = local, app id = local-1686829056083).
Spark session available as 'spark'.
- broadcast vars
Spark context available as 'sc' (master = local, app id = local-1686829059606).
Spark session available as 'spark'.
- line wrapper only initialized once when used as encoder outer scope
Spark context available as 'sc' (master = local-cluster[1,1,1024], app id = app-20230615043742-0000).
Spark session available as 'spark'.

// Exiting paste mode, now interpreting.

- define case class and create Dataset together with paste mode *** FAILED ***
  isContain was true Interpreter output contained 'Exception':
  Welcome to
        ____              __
       / __/__  ___ _____/ /__
      _\ \/ _ \/ _ `/ __/  '_/
     /___/ .__/\_,_/_/ /_/\_\   version 3.5.0-SNAPSHOT
        /_/

  Using Scala version 2.12.17 (OpenJDK 64-Bit Server VM, Java 1.8.0_372)
  Type in expressions to have them evaluated.
  Type :help for more information.

  scala> // Entering paste mode (ctrl-D to finish)

  java.lang.NoClassDefFoundError: org/sparkproject/guava/util/concurrent/AtomicLongMap
    at org.apache.spark.sql.catalyst.rules.QueryExecutionMetering.<init>(QueryExecutionMetering.scala:27)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$.<init>(RuleExecutor.scala:31)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor$.<clinit>(RuleExecutor.scala)
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:192)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.$anonfun$canonicalize$1(GenerateUnsafeProjection.scala:319)
    at scala.collection.immutable.List.map(List.scala:293)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.canonicalize(GenerateUnsafeProjection.scala:319)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.generate(GenerateUnsafeProjection.scala:327)
    at org.apache.spark.sql.catalyst.expressions.UnsafeProjection$.createCodeGeneratedObject(Projection.scala:124)
    at org.apache.spark.sql.catalyst.expressions.UnsafeProjection$.createCodeGeneratedObject(Projection.scala:120)
    at org.apache.spark.sql.catalyst.expressions.CodeGeneratorWithInterpretedFallback.createObject(CodeGeneratorWithInterpretedFallback.scala:51)
    at org.apache.spark.sql.catalyst.expressions.UnsafeProjection$.create(Projection.scala:151)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer.apply(ExpressionEncoder.scala:198)
    at org.apache.spark.sql.SparkSession.$anonfun$createDataset$1(SparkSession.scala:483)
[... intermediate log output omitted in the original report ...]
- SPARK-2576 importing implicits
- Datasets and encoders *** FAILED ***
  isContain was true Interpreter output contained 'error:':

  scala> import org.apache.spark.sql.functions._

  scala> import org.apache.spark.sql.{Encoder, Encoders}

  scala> import org.apache.spark.sql.expressions.Aggregator

  scala> import org.apache.spark.sql.TypedColumn

  scala>      |      |      |      |      |      |      | simpleSum: org.apache.spark.sql.TypedColumn[Int,Int] = $anon$1(boundreference() AS value, value, unresolveddeserializer(assertnotnull(upcast(getcolumnbyordinal(0, IntegerType), IntegerType, - root class: "int")), value#9), boundreference() AS value)

  scala> 
  scala> java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.sql.catalyst.rules.RuleExecutor$
    at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:192)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.$anonfun$canonicalize$1(GenerateUnsafeProjection.scala:319)
    at scala.collection.immutable.List.map(List.scala:293)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.canonicalize(GenerateUnsafeProjection.scala:319)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.generate(GenerateUnsafeProjection.scala:327)
    at org.apache.spark.sql.catalyst.expressions.UnsafeProjection$.createCodeGeneratedObject(Projection.scala:124)
    at org.apache.spark.sql.catalyst.expressions.UnsafeProjection$.createCodeGeneratedObject(Projection.scala:120)
    at org.apache.spark.sql.catalyst.expressions.CodeGeneratorWithInterpretedFallback.createObject(CodeGeneratorWithInterpretedFallback.scala:51)
    at org.apache.spark.sql.catalyst.expressions.UnsafeProjection$.create(Projection.scala:151)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$Serializer.apply(ExpressionEncoder.scala:198)
    at org.apache.spark.sql.SparkSession.$anonfun$createDataset$1(SparkSession.scala:483)
    at scala.collection.immutable.List.map(List.scala:293)
    at org.apache.spark.sql.SparkSession.createDataset(SparkSession.scala:483)
    at org.apache.spark.sql.SQLContext.createDataset(SQLContext.scala:354)
    at org.apache.spark.sql.SQLImplicits.localSeqToDatasetHolder(SQLImplicits.scala:244)
    ... 39 elided

  scala> <console>:33: error: not found: value ds
         ds.select(simpleSum).collect
         ^

  scala>      | _result_1686829100269: Int = 1

  scala> (SingletonReplSuite.scala:106)
{code})
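Both failures point at classes under org.sparkproject.guava, the namespace into which Spark's Maven build relocates Guava. For orientation, the sketch below approximates the two initialization paths named in the traces; the names ending in "Like" are illustrative stand-ins rather than the real Spark classes, and the imports are the unshaded com.google.common ones that the build is expected to rewrite to org.sparkproject.guava.
{code:scala}
// Illustrative sketch (a simplification of the Spark sources named in the
// traces, not a verbatim copy). Both paths touch Guava in a constructor or a
// static initializer, so the relocated classes must be on the test classpath.
import com.google.common.cache.{Cache, CacheBuilder}
import com.google.common.util.concurrent.AtomicLongMap

class SessionCatalogLike {
  // SessionCatalog.scala:197 - the catalog builds a relation cache with
  // CacheBuilder, so merely constructing it loads the (relocated) class.
  val tableRelationCache: Cache[String, AnyRef] =
    CacheBuilder.newBuilder().maximumSize(1000).build[String, AnyRef]()
}

object QueryExecutionMeteringLike {
  // QueryExecutionMetering.scala:27 - per-rule timing map, reached from the
  // RuleExecutor companion's static initializer; once this first load fails,
  // later tests see "Could not initialize class ...RuleExecutor$" instead.
  val timeMap: AtomicLongMap[String] = AtomicLongMap.create[String]()
}
{code}
If the diagnostic snippet above reports both classes missing, one plausible (unconfirmed) reading is that the repl module's test classpath resolved artifacts built without the Guava relocation applied.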

> maven test ReplSuite failed
> ---------------------------
>
>                 Key: SPARK-44069
>                 URL: https://issues.apache.org/jira/browse/SPARK-44069
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.5.0
>            Reporter: Yang Jie
>            Priority: Major
>
> https://github.com/LuciferYang/spark/actions/runs/5274544416/jobs/9541917589



