This is an automated email from the ASF dual-hosted git repository.
yihua pushed a change to branch branch-0.x
in repository https://gitbox.apache.org/repos/asf/hudi.git
from 91e176c0ef7 [HUDI-7431] Add replication and block size to StoragePathInfo to be backwards compatible (#10717)
new eccd183a3d3 [HUDI-7452] Repartition row dataset in S3/GCS based on task size (#10777)
new 1360b821ac6 [HUDI-7456] Set 'hudi' as the explicit provider for new table properties when create table by spark (#10776)
new dfd40dc7d8f [HUDI-7385] Add config for custom write support for parquet row writer (#10598)
new 29a2a6ccd4b Revert "[HUDI-6438] Config parameter 'MAKE_NEW_COLUMNS_NULLABLE' to allow for marking a newly created column as nullable." (#10782)
new a598bd555f1 [HUDI-7459] Update hudi-gcp-bundle pom (#10790)
new 7a5719e0660 [HUDI-7462] Refactor checkTopicCheckpoint in KafkaOffsetGen for reusability (#10794)
new 0127db94e12 [HUDI-7464] Fix minor bugs in kafka post-processing related code (#10772)
new 937513b6510 [MINOR] Fix violations of Sonarqube rule java:S2184 (#10444)
The 8 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails. The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
Summary of changes:
.../hudi/index/hbase/SparkHoodieHBaseIndex.java | 2 +-
.../io/storage/HoodieSparkFileWriterFactory.java | 19 ++++---
.../row/HoodieInternalRowFileWriterFactory.java | 4 +-
.../storage/row/HoodieRowParquetWriteSupport.java | 19 +++++--
.../hbase/TestHBasePutBatchSizeCalculator.java | 12 ++--
.../hudi/common/config/HoodieCommonConfig.java | 9 ---
.../hudi/common/config/HoodieStorageConfig.java | 11 ++++
.../hudi/common/fs/SizeAwareDataOutputStream.java | 2 +-
.../schema/utils/AvroSchemaEvolutionUtils.java | 9 +--
hudi-gcp/pom.xml | 2 +-
.../scala/org/apache/hudi/DataSourceOptions.scala | 2 -
.../scala/org/apache/hudi/HoodieSchemaUtils.scala | 2 +-
.../scala/org/apache/hudi/HoodieWriterUtils.scala | 1 -
.../hudi/command/CreateHoodieTableCommand.scala | 7 ++-
.../row/TestHoodieInternalRowParquetWriter.java | 3 +-
.../apache/hudi/functional/TestCOWDataSource.scala | 47 +---------------
.../apache/spark/sql/hudi/TestCreateTable.scala | 3 +-
.../hudi/utilities/HoodieWithTimelineServer.java | 2 +-
.../utilities/schema/KafkaOffsetPostProcessor.java | 35 ++++++++----
.../hudi/utilities/sources/JsonKafkaSource.java | 4 +-
.../helpers/CloudObjectsSelectorCommon.java | 15 ++++-
.../utilities/sources/helpers/KafkaOffsetGen.java | 23 ++++----
.../schema/TestKafkaOffsetPostProcessor.java | 65 ++++++++++++++++++++++
.../utilities/sources/TestJsonKafkaSource.java | 3 +
.../helpers/TestCloudObjectsSelectorCommon.java | 27 ++++++++-
.../state=CA => country=IND/state=TS}/data.json | 0
.../country=US/{state=CA => state=TX}/data.json | 0
packaging/hudi-gcp-bundle/pom.xml | 2 +-
pom.xml | 1 +
29 files changed, 208 insertions(+), 123 deletions(-)
create mode 100644 hudi-utilities/src/test/java/org/apache/hudi/utilities/schema/TestKafkaOffsetPostProcessor.java
copy hudi-utilities/src/test/resources/data/partitioned/{country=US/state=CA => country=IND/state=TS}/data.json (100%)
copy hudi-utilities/src/test/resources/data/partitioned/country=US/{state=CA => state=TX}/data.json (100%)