rajat0807 commented on issue #1581:
URL: https://github.com/apache/hudi/issues/1581#issuecomment-849820311


   - AWS EMR: 5.32.0
   - Hudi version: 0.8.0
   - Spark version: 2.4.7
   - Hive version: 2.3.7
   
   With the above configuration, trying to add a new column to a partitioned table fails during Hive sync with the error below (a minimal sketch of the write that triggers it is at the end of this comment):
   
   > Caused by: org.apache.hudi.hive.HoodieHiveSyncException: Failed in executing SQL ALTER TABLE `delta_test`.`HUDIv3` REPLACE COLUMNS(`_hoodie_commit_time` string, `_hoodie_commit_seqno` string, `_hoodie_record_key` string, `_hoodie_partition_path` string, `_hoodie_file_name` string, `firstName` string, `date` string, `age` bigint, `level` string, `isGraduated` boolean, `newCol` string ) cascade
   >   at org.apache.hudi.hive.HoodieHiveClient.updateHiveSQL(HoodieHiveClient.java:369)
   >   at org.apache.hudi.hive.HoodieHiveClient.updateTableDefinition(HoodieHiveClient.java:251)
   >   at org.apache.hudi.hive.HiveSyncTool.syncSchema(HiveSyncTool.java:206)
   >   at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:154)
   >   at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:108)
   >   ... 86 more
   > Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: NullPointerException null
   >   at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:256)
   >   at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:242)
   >   at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:254)
   >   at org.apache.hudi.hive.HoodieHiveClient.updateHiveSQL(HoodieHiveClient.java:367)
   >   ... 90 more
   > Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: NullPointerException null
   >   at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
   >   at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:206)
   >   at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:290)
   >   at org.apache.hive.service.cli.operation.Operation.run(Operation.java:320)
   >   at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:530)
   >   at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:517)
   >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   >   at java.lang.reflect.Method.invoke(Method.java:498)
   >   at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
   >   at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
   >   at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
   >   at java.security.AccessController.doPrivileged(Native Method)
   >   at javax.security.auth.Subject.doAs(Subject.java:422)
   >   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
   >   at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
   >   at com.sun.proxy.$Proxy41.executeStatementAsync(Unknown Source)
   >   at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:310)
   >   at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:530)
   >   at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1437)
   >   at org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1422)
   >   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
   >   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
   >   at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
   >   at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
   >   ... 3 more
   > Caused by: java.lang.NullPointerException: null
   >   at org.apache.hadoop.hive.metastore.Warehouse.makePartName(Warehouse.java:592)
   >   at org.apache.hadoop.hive.metastore.Warehouse.makePartName(Warehouse.java:534)
   >   at org.apache.hadoop.hive.ql.metadata.Partition.getName(Partition.java:192)
   >   at org.apache.hadoop.hive.ql.hooks.Entity.doComputeName(Entity.java:340)
   >   at org.apache.hadoop.hive.ql.hooks.Entity.computeName(Entity.java:330)
   >   at org.apache.hadoop.hive.ql.hooks.Entity.<init>(Entity.java:205)
   >   at org.apache.hadoop.hive.ql.hooks.WriteEntity.<init>(WriteEntity.java:105)
   >   at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addInputsOutputsAlterTable(DDLSemanticAnalyzer.java:1502)
   >   at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addInputsOutputsAlterTable(DDLSemanticAnalyzer.java:1479)
   >   at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterTableModifyCols(DDLSemanticAnalyzer.java:2718)
   >   at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:281)
   >   at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
   >   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
   >   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
   >   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1295)
   >   at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:204)
   >   ... 27 more
   
   
   As per https://jira.apache.org/jira/browse/HUDI-874, this ALTER TABLE CASCADE issue should already be fixed on EMR 5.32.0, yet it still occurs here. Are there any possible solutions or workarounds?
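
   For reference, here is roughly the shape of the job that triggers this, as a minimal sketch runnable in spark-shell on EMR. The record key, precombine, and partition fields, the `partitionCol` column, and the S3 path are illustrative placeholders, not the exact config of my job:

   ```scala
   import org.apache.spark.sql.SaveMode
   import spark.implicits._

   val basePath = "s3://<bucket>/delta_test/HUDIv3"  // placeholder path

   val hudiOptions = Map(
     "hoodie.table.name" -> "HUDIv3",
     "hoodie.datasource.write.recordkey.field" -> "firstName",       // assumption
     "hoodie.datasource.write.precombine.field" -> "date",           // assumption
     "hoodie.datasource.write.partitionpath.field" -> "partitionCol",// hypothetical partition column
     "hoodie.datasource.hive_sync.enable" -> "true",
     "hoodie.datasource.hive_sync.database" -> "delta_test",
     "hoodie.datasource.hive_sync.table" -> "HUDIv3",
     "hoodie.datasource.hive_sync.partition_fields" -> "partitionCol"
   )

   // Initial write: the schema before the new column is added.
   Seq(("John", "2021-05-27", 25L, "senior", true, "part1"))
     .toDF("firstName", "date", "age", "level", "isGraduated", "partitionCol")
     .write.format("hudi").options(hudiOptions)
     .mode(SaveMode.Overwrite).save(basePath)

   // Second write adds `newCol`; Hive sync then issues the
   // ALTER TABLE ... REPLACE COLUMNS ... CASCADE that fails above.
   Seq(("Jane", "2021-05-28", 30L, "senior", false, "part1", "value"))
     .toDF("firstName", "date", "age", "level", "isGraduated", "partitionCol", "newCol")
     .write.format("hudi").options(hudiOptions)
     .mode(SaveMode.Append).save(basePath)
   ```

   The first write succeeds and the table syncs to Hive; only the second write, which evolves the schema, hits the NullPointerException in `Warehouse.makePartName`.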

