This is an automated email from the ASF dual-hosted git repository.

zhangyue19921010 pushed a change to branch rfc-60-ossstorage-poc
in repository https://gitbox.apache.org/repos/asf/hudi.git


    from 8eab65592a5 poc finished
     add c56b6a3fc1e Revert "fix checkstyle"
     add 13986f849b1 Revert "flink schedule compaction with incremental partitions"

No new revisions were added by this update.

Summary of changes:
 .../hudi/client/BaseHoodieTableServiceClient.java  |   9 +-
 .../java/org/apache/hudi/table/HoodieTable.java    |   4 +-
 .../compact/ScheduleCompactionActionExecutor.java  |  13 +-
 .../BaseHoodieCompactionPlanGenerator.java         |  16 +--
 .../generators/HoodieCompactionPlanGenerator.java  |   7 +-
 .../HoodieLogCompactionPlanGenerator.java          |   4 +-
 .../hudi/table/HoodieFlinkCopyOnWriteTable.java    |   4 +-
 .../hudi/table/HoodieFlinkMergeOnReadTable.java    |  13 +-
 .../hudi/table/HoodieJavaCopyOnWriteTable.java     |   4 +-
 .../hudi/table/HoodieJavaMergeOnReadTable.java     |   8 +-
 .../hudi/table/HoodieSparkCopyOnWriteTable.java    |   4 +-
 .../hudi/table/HoodieSparkMergeOnReadTable.java    |   8 +-
 .../apache/hudi/configuration/FlinkOptions.java    |   6 -
 .../hudi/sink/StreamWriteOperatorCoordinator.java  | 140 +++------------------
 .../java/org/apache/hudi/util/CompactionUtil.java  |  14 +--
 .../apache/hudi/sink/ITTestDataStreamWrite.java    |  14 ---
 ...CoordinatorWithIncrementalCompactionEnable.java |  67 ----------
 ...eadWithIncrementalScheduleCompactionEnable.java |  32 -----
 .../org/apache/hudi/utils/TestCompactionUtil.java  |   4 +-
 .../table/action/compact/TestHoodieCompactor.java  |   4 +-
 20 files changed, 55 insertions(+), 320 deletions(-)
 delete mode 100644 hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/TestStreamWriteOperatorCoordinatorWithIncrementalCompactionEnable.java
 delete mode 100644 hudi-flink-datasource/hudi-flink/src/test/java/org/apache/hudi/sink/TestWriteMergeOnReadWithIncrementalScheduleCompactionEnable.java
