dongkelun commented on pull request #3843:
URL: https://github.com/apache/hudi/pull/3843#issuecomment-954356834


   Hello, with a jar compiled from the latest code on the master branch, the following exception is thrown when running Spark SQL. I don't know whether it is related to this PR; is there a jar package missing?
   
   Copying hbase-server-1.2.3.jar into the SPARK_HOME/jars path does not fix it. If I set hoodie.metadata.enable=false, the exception is not thrown (see the sketch after the log below), but I don't know how to make it work with the default settings.
   
   ```text
   21/10/28 20:04:47 INFO AbstractHoodieLogRecordScanner: Scanning log file HoodieLogFile{pathStr='hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi_table/.hoodie/metadata/files/.files-0000_20211028193934.log.1_0-0-0', fileLen=0}
   21/10/28 20:04:47 INFO AbstractHoodieLogRecordScanner: Reading a delete block from file hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi_table/.hoodie/metadata/files/.files-0000_20211028193934.log.1_0-0-0
   21/10/28 20:04:47 INFO HoodieLogFormatReader: Moving to the next reader for logfile HoodieLogFile{pathStr='hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi_table/.hoodie/metadata/files/.files-0000_20211028193934.log.1_0-52-1831', fileLen=0}
   21/10/28 20:04:47 INFO AbstractHoodieLogRecordScanner: Scanning log file HoodieLogFile{pathStr='hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi_table/.hoodie/metadata/files/.files-0000_20211028193934.log.1_0-52-1831', fileLen=0}
   21/10/28 20:04:47 INFO AbstractHoodieLogRecordScanner: Reading a data block from file hdfs://cluster1/warehouse/tablespace/managed/hive/test_hudi_table/.hoodie/metadata/files/.files-0000_20211028193934.log.1_0-52-1831 at instant 20211028193934
   21/10/28 20:04:47 INFO AbstractHoodieLogRecordScanner: Merging the final data blocks
   21/10/28 20:04:47 INFO AbstractHoodieLogRecordScanner: Number of remaining logblocks to merge 2
   21/10/28 20:04:48 INFO AbstractHoodieLogRecordScanner: Number of remaining logblocks to merge 1
   21/10/28 20:04:48 INFO CacheConfig: Allocating onheap LruBlockCache size=368.20 MB, blockSize=64 KB
   21/10/28 20:04:48 INFO CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=277.15 KB, freeSize=367.93 MB, maxSize=368.20 MB, heapSize=277.15 KB, minSize=349.79 MB, minFactor=0.95, multiSize=174.90 MB, multiFactor=0.5, singleSize=87.45 MB, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
   21/10/28 20:04:48 ERROR SparkSQLDriver: Failed in [update test_hudi_table set price =22222,NAME='TEST_UPPER77777' where id = 2]
   org.spark_project.guava.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.io.hfile.HFile.createReader(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/hbase/io/FSDataInputStreamWrapper;JLorg/apache/hadoop/hbase/io/hfile/CacheConfig;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/hbase/io/hfile/HFile$Reader;
           at org.spark_project.guava.cache.LocalCache$Segment.get(LocalCache.java:2261)
           at org.spark_project.guava.cache.LocalCache.get(LocalCache.java:4000)
           at org.spark_project.guava.cache.LocalCache$LocalManualCache.get(LocalCache.java:4789)
           at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getCachedPlan(SessionCatalog.scala:141)
           at org.apache.spark.sql.execution.datasources.FindDataSourceTable.org$apache$spark$sql$execution$datasources$FindDataSourceTable$$readDataSourceTable(DataSourceStrategy.scala:227)
           at org.apache.spark.sql.execution.datasources.FindDataSourceTable$$anonfun$apply$2.applyOrElse(DataSourceStrategy.scala:264)
           at org.apache.spark.sql.execution.datasources.FindDataSourceTable$$anonfun$apply$2.applyOrElse(DataSourceStrategy.scala:255)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
           at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
           at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:328)
           at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:186)
           at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:326)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:113)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
           at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:328)
           at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:186)
           at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:326)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:113)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
           at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
           at org.apache.spark.sql.execution.datasources.FindDataSourceTable.apply(DataSourceStrategy.scala:255)
           at org.apache.spark.sql.execution.datasources.FindDataSourceTable.apply(DataSourceStrategy.scala:223)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
           at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
           at scala.collection.immutable.List.foldLeft(List.scala:84)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
           at scala.collection.immutable.List.foreach(List.scala:392)
           at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
           at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
           at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
           at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
           at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
           at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:58)
           at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:56)
           at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
           at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:78)
           at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
           at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:62)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:371)
           at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:274)
           at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
           at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
           at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
           at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
           at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
           at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
           at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
           at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   
   ```
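
   To be concrete, the sketch mentioned above: this is the only thing that avoids the exception for me right now (run from spark-shell; `test_hudi_table` is just my local test table):

   ```scala
   // Workaround sketch: disabling the Hudi metadata table keeps reads off the
   // HFile code path, so the conflicting HBase createReader is never called.
   spark.sql("set hoodie.metadata.enable=false")
   spark.sql("update test_hudi_table set price = 22222, NAME = 'TEST_UPPER77777' where id = 2")
   ```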

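   In case it helps narrow this down, a diagnostic sketch of my own (not from Hudi; it assumes a spark-shell with the HBase jars on the classpath) to print the `HFile.createReader` overloads the driver classloader actually sees, and therefore which HBase version wins:

   ```scala
   // Diagnostic sketch: list every createReader overload that is loaded at
   // runtime; the NoSuchMethodError means the exact variant from the stack
   // trace is not among them.
   import org.apache.hadoop.hbase.io.hfile.HFile

   classOf[HFile].getMethods
     .filter(_.getName == "createReader")
     .foreach(m => println(m))
   ```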
