[jira] [Closed] (HUDI-517) compact error when hoodie.compact.inline is true

2020-01-13 Thread liujianhui (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liujianhui closed HUDI-517.
---
Resolution: Fixed

> compact error when hoodie.compact.inline is true
> 
>
> Key: HUDI-517
> URL: https://issues.apache.org/jira/browse/HUDI-517
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>  Components: Compaction
>Reporter: liujianhui
>Priority: Minor
>
> # set the property hoodie.compact.inline as true
>  # the write process completes within 1 second
>  # the instant time of the compaction is the same as the commit instant time
>  
> {code}
> java.lang.IllegalArgumentException: Following instants have timestamps >= 
> compactionInstant (20200110171526) Instants 
> :[[20200110171526__deltacommit__COMPLETED]]
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
>  at 
> org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
>  at 
> org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
>  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
>  at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>  at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
>  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
>  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HUDI-518) compact error when hoodie.compact.inline is true

2020-01-13 Thread liujianhui (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liujianhui resolved HUDI-518.
-
Resolution: Fixed

> compact error when hoodie.compact.inline is true
> 
>
> Key: HUDI-518
> URL: https://issues.apache.org/jira/browse/HUDI-518
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>  Components: Compaction, Writer Core
>Reporter: liujianhui
>Priority: Minor
>
> # set the property hoodie.compact.inline as true
>  # the write process completes within 1 second
>  # the instant time of the compaction is the same as the commit instant time
>  
> {code}
> java.lang.IllegalArgumentException: Following instants have timestamps >= 
> compactionInstant (20200110171526) Instants 
> :[[20200110171526__deltacommit__COMPLETED]]
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
>  at 
> org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
>  at 
> org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
>  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
>  at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>  at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
>  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
>  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HUDI-518) compact error when hoodie.compact.inline is true

2020-01-13 Thread liujianhui (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liujianhui updated HUDI-518:

Status: Open  (was: New)

> compact error when hoodie.compact.inline is true
> 
>
> Key: HUDI-518
> URL: https://issues.apache.org/jira/browse/HUDI-518
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>  Components: Compaction, Writer Core
>Reporter: liujianhui
>Priority: Minor
>
> # set the property hoodie.compact.inline as true
>  # the write process completes within 1 second
>  # the instant time of the compaction is the same as the commit instant time
>  
> {code}
> java.lang.IllegalArgumentException: Following instants have timestamps >= 
> compactionInstant (20200110171526) Instants 
> :[[20200110171526__deltacommit__COMPLETED]]
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
>  at 
> org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
>  at 
> org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
>  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
>  at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>  at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
>  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
>  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HUDI-517) compact error when hoodie.compact.inline is true

2020-01-13 Thread liujianhui (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liujianhui updated HUDI-517:

Status: Open  (was: New)

> compact error when hoodie.compact.inline is true
> 
>
> Key: HUDI-517
> URL: https://issues.apache.org/jira/browse/HUDI-517
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>  Components: Compaction
>Reporter: liujianhui
>Priority: Minor
>
> # set the property hoodie.compact.inline as true
>  # the write process completes within 1 second
>  # the instant time of the compaction is the same as the commit instant time
>  
> {code}
> java.lang.IllegalArgumentException: Following instants have timestamps >= 
> compactionInstant (20200110171526) Instants 
> :[[20200110171526__deltacommit__COMPLETED]]
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
>  at 
> org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
>  at 
> org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
>  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
>  at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>  at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
>  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
>  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HUDI-526) inline compact does not work

2020-01-13 Thread liujianhui (Jira)
liujianhui created HUDI-526:
---

 Summary: inline compact does not work
 Key: HUDI-526
 URL: https://issues.apache.org/jira/browse/HUDI-526
 Project: Apache Hudi (incubating)
  Issue Type: Bug
  Components: Compaction
Reporter: liujianhui


hoodie.compact.inline is set to true

hoodie.index.type is set to INMEMORY

compaction does not occur after the delta commit

{code}

20/01/13 16:43:43 INFO HoodieMergeOnReadTable: Checking if compaction needs to 
be run on file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO HoodieMergeOnReadTable: Compacting merge on read table 
file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO FileSystemViewManager: Creating InMemory based view for 
basePath file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient 
from file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO FSUtils: Hadoop Configuration: fs.defaultFS: [file:///], 
Config:[Configuration: core-default.xml, core-site.xml, mapred-default.xml, 
mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, 
hdfs-site.xml, __spark_hadoop_conf__.xml], FileSystem: 
[org.apache.hadoop.fs.LocalFileSystem@6a24b9e2]
20/01/13 16:43:43 INFO HoodieTableConfig: Loading table properties from 
file:/tmp/hudi_cow_table_read/.hoodie/hoodie.properties
20/01/13 16:43:43 INFO HoodieTableMetaClient: Finished Loading Table of type 
MERGE_ON_READ(version=org.apache.hudi.common.model.TimelineLayoutVersion@20) 
from file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO HoodieTableMetaClient: Loading Active commit timeline 
for file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO HoodieActiveTimeline: Loaded instants 
[[20200109181330__deltacommit__COMPLETED], 
[2020011017__deltacommit__COMPLETED], 
[20200110171526__deltacommit__COMPLETED], 
[20200113105844__deltacommit__COMPLETED], 
[20200113145851__deltacommit__COMPLETED], 
[20200113155502__deltacommit__COMPLETED], 
[20200113164342__deltacommit__COMPLETED]]
20/01/13 16:43:43 INFO HoodieRealtimeTableCompactor: Compacting 
file:///tmp/hudi_cow_table_read with commit 20200113164343
20/01/13 16:43:43 INFO HoodieTableMetaClient: Loading HoodieTableMetaClient 
from file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO FSUtils: Hadoop Configuration: fs.defaultFS: [file:///], 
Config:[Configuration: core-default.xml, core-site.xml, mapred-default.xml, 
mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, 
hdfs-site.xml, __spark_hadoop_conf__.xml], FileSystem: 
[org.apache.hadoop.fs.LocalFileSystem@6a24b9e2]
20/01/13 16:43:43 INFO HoodieTableConfig: Loading table properties from 
file:/tmp/hudi_cow_table_read/.hoodie/hoodie.properties
20/01/13 16:43:43 INFO HoodieTableMetaClient: Finished Loading Table of type 
MERGE_ON_READ(version=org.apache.hudi.common.model.TimelineLayoutVersion@20) 
from file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO HoodieTableMetaClient: Loading Active commit timeline 
for file:///tmp/hudi_cow_table_read
20/01/13 16:43:43 INFO HoodieActiveTimeline: Loaded instants 
[[20200109181330__deltacommit__COMPLETED], 
[2020011017__deltacommit__COMPLETED], 
[20200110171526__deltacommit__COMPLETED], 
[20200113105844__deltacommit__COMPLETED], 
[20200113145851__deltacommit__COMPLETED], 
[20200113155502__deltacommit__COMPLETED], 
[20200113164342__deltacommit__COMPLETED]]

{code} 

no compaction instant is recorded in the .hoodie path
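
For reference, a minimal sketch of the writer configuration this report describes; the
option keys are standard Hudi configs, but the values, the DataFrame df, and the class
scaffolding are illustrative assumptions, not taken from the report:

{code}
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class InlineCompactWrite {
  static void write(Dataset<Row> df) {
    df.write().format("org.apache.hudi")
        .option("hoodie.table.name", "hudi_cow_table_read")
        // named hoodie.datasource.write.storage.type in older releases
        .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
        .option("hoodie.index.type", "INMEMORY")
        .option("hoodie.compact.inline", "true")
        // gates how many delta commits must accumulate before inline compaction
        .option("hoodie.compact.inline.max.delta.commits", "1")
        .mode(SaveMode.Append)
        .save("file:///tmp/hudi_cow_table_read");
  }
}
{code}

With the default hoodie.compact.inline.max.delta.commits of 5, the seven completed delta
commits in the timeline above should already have been enough to schedule a compaction, so
the threshold alone does not appear to explain the missing compaction instant.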



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HUDI-525) insert info missing in delta_commit_inflight meta file

2020-01-12 Thread liujianhui (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liujianhui updated HUDI-525:

Priority: Minor  (was: Major)

> insert info missing in delta_commit_inflight meta file
> 
>
> Key: HUDI-525
> URL: https://issues.apache.org/jira/browse/HUDI-525
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>Reporter: liujianhui
>Priority: Minor
>
> should add insert info in WorkloadStat
> {code}
> private void saveWorkloadProfileMetadataToInflight(WorkloadProfile profile,
>     HoodieTable table, String commitTime) throws HoodieCommitException {
>   try {
>     HoodieCommitMetadata metadata = new HoodieCommitMetadata();
>     profile.getPartitionPaths().forEach(path -> {
>       WorkloadStat partitionStat = profile.getWorkloadStat(path.toString());
>       partitionStat.getUpdateLocationToCount().forEach((key, value) -> {
>         HoodieWriteStat writeStat = new HoodieWriteStat();
>         writeStat.setFileId(key);
>         // TODO : Write baseCommitTime is possible here ?
>         writeStat.setPrevCommit(value.getKey());
>         writeStat.setNumUpdateWrites(value.getValue());
>         metadata.addWriteStat(path.toString(), writeStat);
>       });
>     });
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HUDI-525) insert info missing in delta_commit_inflight meta file

2020-01-12 Thread liujianhui (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17014094#comment-17014094
 ] 

liujianhui edited comment on HUDI-525 at 1/13/20 7:50 AM:
--

{code}

cat 20200113145851.deltacommit.inflight

{ "partitionToWriteStats" : \\{ }

,
 "compacted" : false,
 "extraMetadataMap" : \{ },
 "totalScanTime" : 0,
 "totalCreateTime" : 0,
 "totalUpsertTime" : 0,
 "totalCompactedRecordsUpdated" : 0,
 "totalLogFilesSize" : 0,
 "totalLogFilesCompacted" : 0,
 "fileIdAndRelativePaths" : \{ },
 "totalRecordsDeleted" : 0,
 "totalLogRecordsCompacted" : 0,
 "extraMetadata" : \{ }
 }

{code}

the delta commit should contain the insert info, even if the file id is unknown 
at that moment


was (Author: liujianhuiouc):
{code}

cat 20200113145851.deltacommit.inflight
{
 "partitionToWriteStats" : { },
 "compacted" : false,
 "extraMetadataMap" : { },
 "totalScanTime" : 0,
 "totalCreateTime" : 0,
 "totalUpsertTime" : 0,
 "totalCompactedRecordsUpdated" : 0,
 "totalLogFilesSize" : 0,
 "totalLogFilesCompacted" : 0,
 "fileIdAndRelativePaths" : { },
 "totalRecordsDeleted" : 0,
 "totalLogRecordsCompacted" : 0,
 "extraMetadata" : { }
}

{code}

the delta commit should contain the insert info, even if the file id is unknown 
at that moment

> insert info missing in delta_commit_inflight meta file
> 
>
> Key: HUDI-525
> URL: https://issues.apache.org/jira/browse/HUDI-525
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>Reporter: liujianhui
>Priority: Major
>
> should add insert info in WorkloadStat
> {code}
> private void saveWorkloadProfileMetadataToInflight(WorkloadProfile profile,
>     HoodieTable table, String commitTime) throws HoodieCommitException {
>   try {
>     HoodieCommitMetadata metadata = new HoodieCommitMetadata();
>     profile.getPartitionPaths().forEach(path -> {
>       WorkloadStat partitionStat = profile.getWorkloadStat(path.toString());
>       partitionStat.getUpdateLocationToCount().forEach((key, value) -> {
>         HoodieWriteStat writeStat = new HoodieWriteStat();
>         writeStat.setFileId(key);
>         // TODO : Write baseCommitTime is possible here ?
>         writeStat.setPrevCommit(value.getKey());
>         writeStat.setNumUpdateWrites(value.getValue());
>         metadata.addWriteStat(path.toString(), writeStat);
>       });
>     });
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HUDI-525) insert info missing in delta_commit_inflight meta file

2020-01-12 Thread liujianhui (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17014094#comment-17014094
 ] 

liujianhui commented on HUDI-525:
-

{code}

cat 20200113145851.deltacommit.inflight
{
 "partitionToWriteStats" : { },
 "compacted" : false,
 "extraMetadataMap" : { },
 "totalScanTime" : 0,
 "totalCreateTime" : 0,
 "totalUpsertTime" : 0,
 "totalCompactedRecordsUpdated" : 0,
 "totalLogFilesSize" : 0,
 "totalLogFilesCompacted" : 0,
 "fileIdAndRelativePaths" : { },
 "totalRecordsDeleted" : 0,
 "totalLogRecordsCompacted" : 0,
 "extraMetadata" : { }
}

{code}

the delta commit should contain the insert info, even if the file id is unknown 
at that moment

> insert info missing in delta_commit_inflight meta file
> 
>
> Key: HUDI-525
> URL: https://issues.apache.org/jira/browse/HUDI-525
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>Reporter: liujianhui
>Priority: Major
>
> should add insert info in WorkloadStat
> {code}
> private void saveWorkloadProfileMetadataToInflight(WorkloadProfile profile,
>     HoodieTable table, String commitTime) throws HoodieCommitException {
>   try {
>     HoodieCommitMetadata metadata = new HoodieCommitMetadata();
>     profile.getPartitionPaths().forEach(path -> {
>       WorkloadStat partitionStat = profile.getWorkloadStat(path.toString());
>       partitionStat.getUpdateLocationToCount().forEach((key, value) -> {
>         HoodieWriteStat writeStat = new HoodieWriteStat();
>         writeStat.setFileId(key);
>         // TODO : Write baseCommitTime is possible here ?
>         writeStat.setPrevCommit(value.getKey());
>         writeStat.setNumUpdateWrites(value.getValue());
>         metadata.addWriteStat(path.toString(), writeStat);
>       });
>     });
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HUDI-525) insert info missing in delta_commit_inflight meta file

2020-01-12 Thread liujianhui (Jira)
liujianhui created HUDI-525:
---

 Summary: insert info missing in delta_commit_inflight meta file
 Key: HUDI-525
 URL: https://issues.apache.org/jira/browse/HUDI-525
 Project: Apache Hudi (incubating)
  Issue Type: Bug
Reporter: liujianhui


should add insert info in WorkloadStat, as sketched after the excerpt below

{code}

private void saveWorkloadProfileMetadataToInflight(WorkloadProfile profile,
    HoodieTable table, String commitTime) throws HoodieCommitException {
  try {
    HoodieCommitMetadata metadata = new HoodieCommitMetadata();
    profile.getPartitionPaths().forEach(path -> {
      WorkloadStat partitionStat = profile.getWorkloadStat(path.toString());
      partitionStat.getUpdateLocationToCount().forEach((key, value) -> {
        HoodieWriteStat writeStat = new HoodieWriteStat();
        writeStat.setFileId(key);
        // TODO : Write baseCommitTime is possible here ?
        writeStat.setPrevCommit(value.getKey());
        writeStat.setNumUpdateWrites(value.getValue());
        metadata.addWriteStat(path.toString(), writeStat);
      });
    });

{code}
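
A hedged sketch of that suggestion, meant to slot into the method just quoted after the
update-stat loop; getNumInserts() on WorkloadStat and setNumInserts() on HoodieWriteStat
are assumed accessors, not verified against this Hudi version:

{code}
// hypothetical addition: also record per-partition insert counts in the inflight metadata
// (getNumInserts()/setNumInserts() are assumed accessors; verify against the actual source)
profile.getPartitionPaths().forEach(path -> {
  WorkloadStat partitionStat = profile.getWorkloadStat(path.toString());
  if (partitionStat.getNumInserts() > 0) {
    HoodieWriteStat insertStat = new HoodieWriteStat();
    insertStat.setNumInserts(partitionStat.getNumInserts());
    // the file id is left unset here, since it is not yet known for new inserts
    metadata.addWriteStat(path.toString(), insertStat);
  }
});
{code}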



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HUDI-518) compact error when hoodie.compact.inline is true

2020-01-12 Thread liujianhui (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013966#comment-17013966
 ] 

liujianhui commented on HUDI-518:
-

I find that this issue has been fixed on master:

{code}
public static String createNewInstantTime() {
  return lastInstantTime.updateAndGet((oldVal) -> {
    String newCommitTime = null;
    do {
      newCommitTime = HoodieActiveTimeline.COMMIT_FORMATTER.format(new Date());
    } while (HoodieTimeline.compareTimestamps(newCommitTime, oldVal, LESSER_OR_EQUAL));
    return newCommitTime;
  });
}
{code}
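
For illustration, a self-contained version of the same idea, assuming the 14-digit
yyyyMMddHHmmss instant format seen in the timestamps above: the formatter has only second
granularity, so the do/while loop spins until the clock string is strictly greater than
the last issued instant, at the cost of blocking up to one second.

{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.atomic.AtomicReference;

public class MonotonicInstantDemo {
  private static final AtomicReference<String> lastInstantTime = new AtomicReference<>("");

  public static String createNewInstantTime() {
    return lastInstantTime.updateAndGet(oldVal -> {
      String newCommitTime;
      do {
        // fixed-width numeric string, so lexicographic compare == numeric compare
        newCommitTime = new SimpleDateFormat("yyyyMMddHHmmss").format(new Date());
      } while (newCommitTime.compareTo(oldVal) <= 0);
      return newCommitTime;
    });
  }

  public static void main(String[] args) {
    // two calls in the same second still yield strictly increasing instants
    System.out.println(createNewInstantTime());
    System.out.println(createNewInstantTime()); // waits into the next second if needed
  }
}
{code}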

> compact error when hoodie.compact.inline is true
> 
>
> Key: HUDI-518
> URL: https://issues.apache.org/jira/browse/HUDI-518
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>  Components: Compaction, Writer Core
>Reporter: liujianhui
>Priority: Minor
>
> # set the property hoodie.compact.inline as true
>  # the write process completes within 1 second
>  # the instant time of the compaction is the same as the commit instant time
>  
> {code}
> java.lang.IllegalArgumentException: Following instants have timestamps >= 
> compactionInstant (20200110171526) Instants 
> :[[20200110171526__deltacommit__COMPLETED]]
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
>  at 
> org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
>  at 
> org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
>  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
>  at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>  at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
>  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
>  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HUDI-518) compact error when hoodie.compact.inline is true

2020-01-10 Thread liujianhui (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013063#comment-17013063
 ] 

liujianhui commented on HUDI-518:
-

the generated compaction instant time should be greater than the last commit instant time
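
A minimal reproduction of that collision, assuming the 14-digit, second-granularity
instant format (e.g. 20200110171526): two instants generated within the same wall-clock
second format to identical strings, so the compaction instant is not strictly greater
than the just-completed delta commit and the precondition fails.

{code}
import java.text.SimpleDateFormat;
import java.util.Date;

public class InstantCollisionDemo {
  public static void main(String[] args) {
    SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHHmmss");
    String deltaCommit = fmt.format(new Date());
    String compactionInstant = fmt.format(new Date()); // same second -> identical string
    // scheduleCompactionAtInstant requires the compaction instant to be strictly
    // greater than every completed instant, so this equality trips the check
    System.out.println(deltaCommit.equals(compactionInstant)); // almost always true
  }
}
{code}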

> compact error when hoodie.compact.inline is true
> 
>
> Key: HUDI-518
> URL: https://issues.apache.org/jira/browse/HUDI-518
> Project: Apache Hudi (incubating)
>  Issue Type: Bug
>  Components: Compaction
>Reporter: liujianhui
>Priority: Minor
>
> # set the property hoodie.compact.inline as true
>  # the write process completes within 1 second
>  # the instant time of the compaction is the same as the commit instant time
>  
> {code}
> java.lang.IllegalArgumentException: Following instants have timestamps >= 
> compactionInstant (20200110171526) Instants 
> :[[20200110171526__deltacommit__COMPLETED]]
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
>  at 
> org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
>  at 
> org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
>  at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
>  at 
> org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
>  at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
>  at 
> org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
>  at 
> org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
>  at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
>  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
>  at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
>  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
>  at 
> org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
>  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HUDI-518) compact error when hoodie.compact.inline is true

2020-01-10 Thread liujianhui (Jira)


 [ 
https://issues.apache.org/jira/browse/HUDI-518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liujianhui updated HUDI-518:

Description: 
# set the property hoodie.compact.inline as true
 # the write process completes within 1 second
 # the instant time of the compaction is the same as the commit instant time

 

{code}

java.lang.IllegalArgumentException: Following instants have timestamps >= 
compactionInstant (20200110171526) Instants 
:[[20200110171526__deltacommit__COMPLETED]]
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
 at 
org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
 at 
org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
 at org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
 at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
 at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
 at 
org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
 at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
 at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
 at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
 at 
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
 at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
 at 
org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
 at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
 at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
 at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
 at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)

{code}

  was:
# set the property hoodie.compact.inline as true
 # the write process completes within 1 second
 # the instant time of the compaction is the same as the commit instant time

 

{code}

java.lang.IllegalArgumentException: Following instants have timestamps >= 
compactionInstant (20200110171526) Instants 
:[[20200110171526__deltacommit__COMPLETED]]
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
 at 
org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
 at 
org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
 at org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
 at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
 at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
 at 
org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
 at 

[jira] [Created] (HUDI-517) compact error when hoodie.compact.inline is true

2020-01-10 Thread liujianhui (Jira)
liujianhui created HUDI-517:
---

 Summary: compact error when hoodie.compact.inline is true
 Key: HUDI-517
 URL: https://issues.apache.org/jira/browse/HUDI-517
 Project: Apache Hudi (incubating)
  Issue Type: Bug
  Components: Compaction
Reporter: liujianhui


# set the property hoodie.compact.inline as true
 # the write process completes within 1 second
 # the instant time of the compaction is the same as the commit instant time

 

{code}

java.lang.IllegalArgumentException: Following instants have timestamps >= 
compactionInstant (20200110171526) Instants 
:[[20200110171526__deltacommit__COMPLETED]]
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
 at 
org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
 at 
org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
 at org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
 at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
 at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
 at 
org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
 at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
 at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
 at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
 at 
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
 at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
 at 
org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
 at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
 at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
 at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
 at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HUDI-518) compact error when hoodie.compact.inline is true

2020-01-10 Thread liujianhui (Jira)
liujianhui created HUDI-518:
---

 Summary: compact error when hoodie.compact.inline is true
 Key: HUDI-518
 URL: https://issues.apache.org/jira/browse/HUDI-518
 Project: Apache Hudi (incubating)
  Issue Type: Bug
  Components: Compaction
Reporter: liujianhui


# set the property hoodie.compact.inline as true
 # the write process completes within 1 second
 # the instant time of the compaction is the same as the commit instant time

 

{code}

java.lang.IllegalArgumentException: Following instants have timestamps >= 
compactionInstant (20200110171526) Instants 
:[[20200110171526__deltacommit__COMPLETED]]
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
 at 
org.apache.hudi.HoodieWriteClient.scheduleCompactionAtInstant(HoodieWriteClient.java:1043)
 at 
org.apache.hudi.HoodieWriteClient.scheduleCompaction(HoodieWriteClient.java:1018)
 at org.apache.hudi.HoodieWriteClient.forceCompact(HoodieWriteClient.java:1292)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:510)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:479)
 at org.apache.hudi.HoodieWriteClient.commit(HoodieWriteClient.java:470)
 at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:152)
 at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:91)
 at 
org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
 at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
 at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
 at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
 at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
 at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
 at 
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
 at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
 at 
org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
 at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
 at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
 at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
 at 
org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
 at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)