vingov commented on issue #4429:
URL: https://github.com/apache/hudi/issues/4429#issuecomment-1001748039


   @xushiyan - Thanks for the reply, but it still fails even with Spark 3.0.2, whereas the same statement works with Hudi 0.9.0.
   
   ```
   spark-sql> create table test_demo_partitioned_cow (
            >                                            id bigint,
            >                                            name string,
            >                                            price double,
            >                                            ts bigint,
            >                                            dt string
            > ) using hudi
            >     partitioned by (dt)
            >     options (
            >                 type = 'cow',
            >                 primaryKey = 'id',
            >                 preCombineField = 'ts',
             >                 hoodie.datasource.write.drop.partition.columns = 'true'
            >             );
   ANTLR Runtime version 4.8 used for parser compilation does not match the current runtime version 4.7.1
   ANTLR Runtime version 4.8 used for parser compilation does not match the current runtime version 4.7.1
   21/12/27 20:48:42 INFO HiveMetaStore: 0: get_database: default
   21/12/27 20:48:42 INFO audit: ugi=root       ip=unknown-ip-addr      cmd=get_database: default
   21/12/27 20:48:42 INFO HiveMetaStore: 0: get_database: default
   21/12/27 20:48:42 INFO audit: ugi=root       ip=unknown-ip-addr      cmd=get_database: default
   21/12/27 20:48:42 INFO HiveMetaStore: 0: get_table : db=default tbl=test_demo_partitioned_cow
   21/12/27 20:48:42 INFO audit: ugi=root       ip=unknown-ip-addr      cmd=get_table : db=default tbl=test_demo_partitioned_cow
   21/12/27 20:48:43 INFO HiveMetaStore: 0: get_database: default
   21/12/27 20:48:43 INFO audit: ugi=root       ip=unknown-ip-addr      cmd=get_database: default
   21/12/27 20:48:43 INFO HiveMetaStore: 0: get_database: default
   21/12/27 20:48:43 INFO audit: ugi=root       ip=unknown-ip-addr      cmd=get_database: default
   21/12/27 20:48:43 INFO HiveMetaStore: 0: get_table : db=default tbl=test_demo_partitioned_cow
   21/12/27 20:48:43 INFO audit: ugi=root       ip=unknown-ip-addr      cmd=get_table : db=default tbl=test_demo_partitioned_cow
   21/12/27 20:48:49 ERROR SparkSQLDriver: Failed in [create table test_demo_partitioned_cow (
                                              id bigint,
                                              name string,
                                              price double,
                                              ts bigint,
                                              dt string
   ) using hudi
       partitioned by (dt)
       options (
                   type = 'cow',
                   primaryKey = 'id',
                   preCombineField = 'ts',
                   hoodie.datasource.write.drop.partition.columns = 'true'
               )]
   java.lang.NoSuchMethodError: 'org.apache.spark.sql.catalyst.expressions.AttributeSet org.apache.spark.sql.catalyst.plans.logical.Command.producedAttributes$(org.apache.spark.sql.catalyst.plans.logical.Command)'
        at org.apache.spark.sql.hudi.command.CreateHoodieTableCommand.producedAttributes(CreateHoodieTableCommand.scala:48)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.references$lzycompute(QueryPlan.scala:72)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.references(QueryPlan.scala:72)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.missingInput(QueryPlan.scala:78)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.statePrefix(QueryPlan.scala:283)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.statePrefix(LogicalPlan.scala:65)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.simpleString(QueryPlan.scala:285)
        at org.apache.spark.sql.catalyst.plans.QueryPlan.verboseString(QueryPlan.scala:287)
        at org.apache.spark.sql.catalyst.trees.TreeNode.generateTreeString(TreeNode.scala:677)
        at org.apache.spark.sql.catalyst.trees.TreeNode.treeString(TreeNode.scala:599)
        at org.apache.spark.sql.catalyst.plans.QueryPlan$.append(QueryPlan.scala:478)
        at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$writePlans(QueryExecution.scala:200)
        at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:212)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:95)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
        at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
        at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:377)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:496)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:490)
        at scala.collection.Iterator.foreach(Iterator.scala:941)
        at scala.collection.Iterator.foreach$(Iterator.scala:941)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
        at scala.collection.IterableLike.foreach(IterableLike.scala:74)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:490)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:282)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.base/java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   
   spark-sql>
   ```
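
   For anyone trying to reproduce this: the launch command is not shown in the transcript above. A typical invocation following the Hudi quick-start guide would look roughly like the sketch below (the bundle coordinates and configs are assumptions, not taken from this report; adjust the bundle version to the Hudi/Spark combination actually in use).
   
   ```
   # Assumed launch command (not part of the original report): spark-sql with the
   # Hudi Spark 3 bundle, the Kryo serializer, and the Hudi SQL session extension.
   spark-sql --packages org.apache.hudi:hudi-spark3-bundle_2.12:0.10.0 \
     --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
     --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension'
   ```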

