[ https://issues.apache.org/jira/browse/HUDI-2089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17373778#comment-17373778 ]

ASF GitHub Bot commented on HUDI-2089:
--------------------------------------

nsivabalan commented on pull request #3182:
URL: https://github.com/apache/hudi/pull/3182#issuecomment-873254058


   There are some compilation errors. Can you please check them?
   
   ```
   [WARNING]  Expected all dependencies to require Scala version: 2.11.12
   [WARNING]  org.apache.hudi:hudi-spark_2.11:0.9.0-SNAPSHOT requires scala version: 2.11.12
   [WARNING]  org.apache.hudi:hudi-spark-client:0.9.0-SNAPSHOT requires scala version: 2.11.12
   [WARNING]  org.apache.hudi:hudi-spark-common_2.11:0.9.0-SNAPSHOT requires scala version: 2.11.12
   [WARNING]  org.apache.hudi:hudi-spark2_2.11:0.9.0-SNAPSHOT requires scala version: 2.11.12
   [WARNING]  com.fasterxml.jackson.module:jackson-module-scala_2.11:2.6.7.1 requires scala version: 2.11.8
   [WARNING] Multiple versions of scala libraries detected!
   [INFO] /home/travis/build/apache/hudi/hudi-spark-datasource/hudi-spark/src/test/java:-1: info: compiling
   [INFO] /home/travis/build/apache/hudi/hudi-spark-datasource/hudi-spark/src/test/scala:-1: info: compiling
   [INFO] Compiling 31 source files to /home/travis/build/apache/hudi/hudi-spark-datasource/hudi-spark/target/test-classes at 1625219965712
   [ERROR] /home/travis/build/apache/hudi/hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/hudi/functional/HoodieSparkSqlWriterSuite.scala:645: error: overloaded method value option with alternatives:
   [ERROR]   (key: String,value: Double)org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] <and>
   [ERROR]   (key: String,value: Long)org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] <and>
   [ERROR]   (key: String,value: Boolean)org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] <and>
   [ERROR]   (key: String,value: String)org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row]
   [ERROR]  cannot be applied to (org.apache.hudi.common.config.ConfigProperty[String], String)
   [ERROR]           .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, tableType)
   [ERROR]            ^
   [ERROR] /home/travis/build/apache/hudi/hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/hudi/functional/HoodieSparkSqlWriterSuite.scala:660: error: overloaded method value option with alternatives:
   [ERROR]   (key: String,value: Double)org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] <and>
   [ERROR]   (key: String,value: Long)org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] <and>
   [ERROR]   (key: String,value: Boolean)org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row] <and>
   [ERROR]   (key: String,value: String)org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row]
   [ERROR]  cannot be applied to (org.apache.hudi.common.config.ConfigProperty[String], String)
   [ERROR]           .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, tableType)
   [ERROR]            ^
   [WARNING] /home/travis/build/apache/hudi/hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/TestHoodieSqlBase.scala:56: warning: A try without a catch or finally is equivalent to putting its body in a block; no exceptions are handled.
   [WARNING]     try super.test(testName, testTags: _*)(try testFun finally {
   [WARNING]     ^
   [WARNING] one warning found
   [ERROR] two errors found
   [INFO] ------------------------------------------------------------------------
   [INFO] Reactor Summary:
   ```
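
Both errors point at the same pattern: the compiler says `option` cannot be applied to `(org.apache.hudi.common.config.ConfigProperty[String], String)`, because every `DataFrameWriter.option` overload takes a plain `String` key, while the `DataSourceWriteOptions` constants here are `ConfigProperty[String]` objects. A minimal sketch of the mismatch and the likely fix, passing the property's key string instead (`ConfigProperty` and `WriterLike` below are simplified stand-ins, not Hudi's actual classes):

```scala
// Stand-in for org.apache.hudi.common.config.ConfigProperty (illustrative only).
final case class ConfigProperty[T](key: String, defaultValue: T)

// Stand-in for DataFrameWriter: every option overload takes a String key.
class WriterLike {
  private val opts = scala.collection.mutable.Map[String, String]()
  def option(key: String, value: String): WriterLike = { opts += key -> value; this }
  def get(key: String): Option[String] = opts.get(key)
}

val TABLE_TYPE_OPT_KEY = ConfigProperty("hoodie.datasource.write.table.type", "COPY_ON_WRITE")
val w = new WriterLike
// w.option(TABLE_TYPE_OPT_KEY, "MERGE_ON_READ")   // does not compile: no overload takes ConfigProperty
w.option(TABLE_TYPE_OPT_KEY.key, "MERGE_ON_READ")  // compiles: pass the property's key string
```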


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


> fix the bug that metatable cannot support non_partition table
> -------------------------------------------------------------
>
>                 Key: HUDI-2089
>                 URL: https://issues.apache.org/jira/browse/HUDI-2089
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: Spark Integration
>    Affects Versions: 0.8.0
>         Environment: spark3.1.1
> hive3.1.1
> hadoop 3.1.1
>            Reporter: tao meng
>            Assignee: tao meng
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.9.0
>
>
> We found that when we enable the metadata table for a non-partitioned Hudi table, the following error occurs:
> org.apache.hudi.exception.HoodieMetadataException: Error syncing to metadata table.
>  at org.apache.hudi.client.SparkRDDWriteClient.syncTableMetadata(SparkRDDWriteClient.java:447)
>  at org.apache.hudi.client.AbstractHoodieWriteClient.postCommit(AbstractHoodieWriteClient.java:433)
>  at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:187)
> We use Hudi 0.8, but the problem also reproduces on the latest Hudi code.
> Test steps:
> val df = spark.range(0, 1000).toDF("keyid")
>  .withColumn("col3", expr("keyid"))
>  .withColumn("age", lit(1))
>  .withColumn("p", lit(2))
> df.write.format("hudi").
>  option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL).
>  option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "col3").
>  option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "keyid").
>  option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "").
>  option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.NonpartitionedKeyGenerator").
>  option(DataSourceWriteOptions.OPERATION_OPT_KEY, "insert").
>  option("hoodie.insert.shuffle.parallelism", "4").
>  option("hoodie.metadata.enable", "true").
>  option(HoodieWriteConfig.TABLE_NAME, "hoodie_test")
>  .mode(SaveMode.Overwrite).save(basePath)
> // upsert the same records again
> df.write.format("hudi").
>  option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY, DataSourceWriteOptions.COW_TABLE_TYPE_OPT_VAL).
>  option(DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY, "col3").
>  option(DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY, "keyid").
>  option(DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY, "").
>  option(DataSourceWriteOptions.KEYGENERATOR_CLASS_OPT_KEY, "org.apache.hudi.keygen.NonpartitionedKeyGenerator").
>  option(DataSourceWriteOptions.OPERATION_OPT_KEY, "upsert").
>  option("hoodie.insert.shuffle.parallelism", "4").
>  option("hoodie.metadata.enable", "true").
>  option(HoodieWriteConfig.TABLE_NAME, "hoodie_test")
>  .mode(SaveMode.Append).save(basePath)
>  
> org.apache.hudi.exception.HoodieMetadataException: Error syncing to metadata table.
>  at org.apache.hudi.client.SparkRDDWriteClient.syncTableMetadata(SparkRDDWriteClient.java:447)
>  at org.apache.hudi.client.AbstractHoodieWriteClient.postCommit(AbstractHoodieWriteClient.java:433)
>  at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:187)
>  at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:121)
>  at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:564)
>  at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:230)
>  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:162)
>  at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  
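
For context on the reproduction above: `NonpartitionedKeyGenerator` with an empty `PARTITIONPATH_FIELD_OPT_KEY` writes data files directly under the table base path, i.e. with an empty relative partition path. A metadata-table sync that indexes file listings by partition string therefore has to map that empty path to a non-empty placeholder key, or the non-partitioned case falls over. A hedged sketch of that handling (the sentinel value and function names are illustrative assumptions, not Hudi's actual internals):

```scala
// Illustrative only: map the empty relative partition path of a
// non-partitioned table to a sentinel key before grouping files.
val NonPartitionedSentinel = "."  // hypothetical placeholder, not Hudi's real constant

def partitionKey(relativePartitionPath: String): String =
  if (relativePartitionPath.isEmpty) NonPartitionedSentinel else relativePartitionPath

// Group (partitionPath, fileName) pairs the way a metadata "files"
// index would: one entry per partition key, including the sentinel.
def groupFilesByPartition(files: Seq[(String, String)]): Map[String, Seq[String]] =
  files.groupBy { case (part, _) => partitionKey(part) }
    .map { case (k, pairs) => k -> pairs.map(_._2) }
```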



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
