asheeshgarg commented on issue #1787:
URL: https://github.com/apache/hudi/issues/1787#issuecomment-659690154
@bvaradar @bhasudha I tried using the following options:
"hoodie.datasource.hive_sync.use_jdbc": False,
"hoodie.datasource.hive_sync.enable": True,
My Spark session is already configured with the thrift URL of the Hive metastore. Will Hudi use that thrift instance, now that I have disabled JDBC?
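For context, a minimal sketch of how these two options might sit in a PySpark job's option dict (the table name, path, and the surrounding write call are hypothetical illustrations, not taken from this issue):

```python
# Sketch only: the Hive-sync options mentioned above, as string values
# (Spark option values are ultimately passed as strings).
hudi_options = {
    "hoodie.table.name": "my_table",  # hypothetical table name
    "hoodie.datasource.hive_sync.enable": "true",
    # With use_jdbc disabled, the sync path goes through the Hive driver
    # (see HoodieHiveClient.updateHiveSQLUsingHiveDriver in the trace below)
    # rather than a JDBC connection:
    "hoodie.datasource.hive_sync.use_jdbc": "false",
}

# In a real job this dict would be passed to the DataFrame writer, e.g.:
# df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/my_table")
```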
I get the following error:
An error occurred while calling o175.save.
java.lang.NoClassDefFoundError: org/json/JSONException
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:10847)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10047)
	at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10128)
	at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:209)
	at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
	at org.apache.hudi.hive.HoodieHiveClient.updateHiveSQLs(HoodieHiveClient.java:515)
	at org.apache.hudi.hive.HoodieHiveClient.updateHiveSQLUsingHiveDriver(HoodieHiveClient.java:498)
	at org.apache.hudi.hive.HoodieHiveClient.updateHiveSQL(HoodieHiveClient.java:488)
	at org.apache.hudi.hive.HoodieHiveClient.createTable(HoodieHiveClient.java:273)
	at org.apache.hudi.hive.HiveSyncTool.syncSchema(HiveSyncTool.java:146)
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]