zyclove opened a new issue, #9119:
URL: https://github.com/apache/hudi/issues/9119

   **Describe the problem you faced**
   
   Occasionally, a null-schema error (`SchemaParseException: Cannot parse <null> schema`) occurs when writing data with the configuration below.
   By the way, when will the next version be released?
   
   
   ```java
   private void upsertAllAction(Dataset<Row> jsonDataSet, long maxUseMemory, String tempPath) {
       int dataKeepTime = 3 * 24 * 60 / config.getTriggerTime();
       jsonDataSet.write()
               .format("org.apache.hudi")
               .option(HoodieTableConfig.TYPE.key(), HoodieTableType.MERGE_ON_READ.name())
               .option(DataSourceWriteOptions.OPERATION().key(), WriteOperationType.UPSERT.value())
               .option(DataSourceWriteOptions.TABLE_TYPE().key(), HoodieTableType.MERGE_ON_READ.name())
               .option(KeyGeneratorOptions.RECORDKEY_FIELD_NAME.key(), config.getIdName())
               .option(KeyGeneratorOptions.PARTITIONPATH_FIELD_NAME.key(), Constants.DT)
               .option(KeyGeneratorOptions.HIVE_STYLE_PARTITIONING_ENABLE.key(), true)
               .option(HoodieWriteConfig.PRECOMBINE_FIELD_NAME.key(), Constants.UPDATE_TIME)
               .option(HoodieWriteConfig.COMBINE_BEFORE_UPSERT.key(), false)
               .option(HoodieWriteConfig.UPSERT_PARALLELISM_VALUE.key(), 200)
               .option(HoodieWriteConfig.FINALIZE_WRITE_PARALLELISM_VALUE.key(), 200)
               .option(HoodieWriteConfig.WRITE_PAYLOAD_CLASS_NAME.key(), DefaultHoodieRecordPayload.class.getName())
               .option(HoodieWriteConfig.AVRO_EXTERNAL_SCHEMA_TRANSFORMATION_ENABLE.key(), true)
               .option(HoodieWriteConfig.AVRO_SCHEMA_VALIDATE_ENABLE.key(), true)
               .option(HoodieWriteConfig.SCHEMA_ALLOW_AUTO_EVOLUTION_COLUMN_DROP.key(), true)
               .option(HoodieCommonConfig.RECONCILE_SCHEMA.key(), true)
               .option(HoodieCommonConfig.SCHEMA_EVOLUTION_ENABLE.key(), true)
               .option(HoodieWriteConfig.MARKERS_TYPE.key(), MarkerType.DIRECT.toString())
               .option(HoodieWriteConfig.COMBINE_BEFORE_INSERT.key(), true)
               .option(HoodiePayloadConfig.PAYLOAD_CLASS_NAME.key(), DefaultHoodieRecordPayload.class.getName())
               .option(HoodieCleanConfig.CLEANER_COMMITS_RETAINED.key(), dataKeepTime)
               .option(HoodieCleanConfig.AUTO_CLEAN.key(), true)
               .option(HoodieCleanConfig.CLEANER_INCREMENTAL_MODE_ENABLE.key(), true)
               .option(HoodieArchivalConfig.MIN_COMMITS_TO_KEEP.key(), dataKeepTime + 1)
               .option(HoodieArchivalConfig.MAX_COMMITS_TO_KEEP.key(), dataKeepTime * 3)
               .option(HoodieCompactionConfig.TARGET_IO_PER_COMPACTION_IN_MB.key(), 500 * 1024)
               .option(HoodieCleanConfig.CLEANER_POLICY.key(), HoodieCleaningPolicy.KEEP_LATEST_BY_HOURS.name())
               .option(HoodieCleanConfig.CLEANER_HOURS_RETAINED.key(), 72)
               .option(HoodieCompactionConfig.PARQUET_SMALL_FILE_LIMIT.key(), 128 * 1024 * 1024)
               .option(HoodieStorageConfig.PARQUET_MAX_FILE_SIZE.key(), 256 * 1024 * 1024)
               .option(HoodieCompactionConfig.INLINE_COMPACT.key(), true)
               .option(HoodieCompactionConfig.INLINE_COMPACT_NUM_DELTA_COMMITS.key(), 0)
               .option(HoodieMemoryConfig.SPILLABLE_MAP_BASE_PATH.key(), tempPath)
               .option(HoodieMetadataConfig.ENABLE.key(), true)
               .option(HoodieMetadataConfig.MIN_COMMITS_TO_KEEP.key(), dataKeepTime + 1)
               .option(HoodieMetadataConfig.MAX_COMMITS_TO_KEEP.key(), dataKeepTime + 2)
               .option(HoodieMetadataConfig.CLEANER_COMMITS_RETAINED.key(), dataKeepTime)
               .option(HoodieMemoryConfig.MAX_MEMORY_FOR_MERGE.key(), maxUseMemory)
               .option(HoodieMemoryConfig.MAX_MEMORY_FOR_COMPACTION.key(), maxUseMemory)
               .option(HoodiePayloadProps.PAYLOAD_ORDERING_FIELD_PROP_KEY, Constants.UPDATE_TIME)
               .option(HoodiePayloadProps.PAYLOAD_EVENT_TIME_FIELD_PROP_KEY, Constants.UPDATE_TIME)
               .option(HoodieTableConfig.NAME.key(), config.getName())
               .option(HoodieTableConfig.KEY_GENERATOR_CLASS_NAME.key(), SimpleKeyGenerator.class.getName())
               .option(HoodieIndexConfig.INDEX_TYPE.key(), HoodieIndex.IndexType.SIMPLE.name())
               .option(DataSourceReadOptions.EXTRACT_PARTITION_VALUES_FROM_PARTITION_PATH().key(), true)
               .option(HoodieIndexConfig.BUCKET_INDEX_NUM_BUCKETS.key(), 20)
               .option(HoodieIndexConfig.BUCKET_INDEX_ENGINE_TYPE.key(), HoodieIndex.BucketIndexEngineType.SIMPLE.name())
               .option(HoodieLayoutConfig.LAYOUT_TYPE.key(), HoodieStorageLayout.LayoutType.DEFAULT.name())
               .option(HoodieLayoutConfig.LAYOUT_PARTITIONER_CLASS_NAME.key(), SparkBucketIndexPartitioner.class.getName())
               .option(HoodieWriteConfig.WRITE_CONCURRENCY_MODE.key(), WriteConcurrencyMode.OPTIMISTIC_CONCURRENCY_CONTROL.name())
               .option(HoodieCleanConfig.FAILED_WRITES_CLEANER_POLICY.key(), HoodieFailedWritesCleaningPolicy.LAZY.name())
               .option("hoodie.write.lock.provider", HiveMetastoreBasedLockProvider.class.getName())
               .option("hoodie.write.lock.hivemetastore.database", "bi_ods_real")
               .option("hoodie.write.lock.hivemetastore.table", getTableName())
               .mode(SaveMode.Append)
               .save(config.getSinkPath());
   }
   ```
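
   One detail worth double-checking in the configuration above is the derived retention count: `dataKeepTime = 3 * 24 * 60 / config.getTriggerTime()` only covers a 3-day window if the trigger interval is in minutes, and the integer division silently rounds down (yielding 0 for intervals above 4320 minutes). A standalone sketch of that arithmetic, using a hypothetical 30-minute trigger interval in place of `config.getTriggerTime()`:

   ```java
   public class RetentionMathSketch {
       // Hypothetical stand-in for config.getTriggerTime(): trigger interval in minutes.
       static final int TRIGGER_TIME_MINUTES = 30;

       public static void main(String[] args) {
           // Commits retained to cover a 3-day window, as in the issue's code.
           int dataKeepTime = 3 * 24 * 60 / TRIGGER_TIME_MINUTES;
           System.out.println("cleaner retains commits: " + dataKeepTime);        // 144
           System.out.println("archival min commits:    " + (dataKeepTime + 1));  // 145
           System.out.println("archival max commits:    " + (dataKeepTime * 3));  // 432
       }
   }
   ```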
   
   **Expected behavior**
   
   The upsert should complete without the intermittent `SchemaParseException: Cannot parse <null> schema`.
   
   **Environment Description**
   
   * Hudi version : 0.13

   * Spark version : 3.2.2

   * Hive version : 3.1.2

   * Hadoop version : 3.2.2

   * Storage (HDFS/S3/GCS..) : S3

   * Running on Docker? (yes/no) : no
   
   
   **Stacktrace**
   
   ```
   23/07/04 06:19:18 WARN InternalKafkaConsumerPool: Pool exceeds its soft max size, cleaning up idle objects...
   23/07/04 06:19:19 ERROR BaseSparkCommitActionExecutor: Error upserting bucketType UPDATE for partition :13
   org.apache.avro.SchemaParseException: Cannot parse <null> schema
           at org.apache.avro.Schema.parse(Schema.java:1633)
           at org.apache.avro.Schema$Parser.parse(Schema.java:1430)
           at org.apache.avro.Schema$Parser.parse(Schema.java:1418)
           at org.apache.hudi.common.util.InternalSchemaCache.getInternalSchemaByVersionId(InternalSchemaCache.java:225)
           at org.apache.hudi.common.util.InternalSchemaCache.getInternalSchemaByVersionId(InternalSchemaCache.java:231)
           at org.apache.hudi.table.action.commit.HoodieMergeHelper.composeSchemaEvolutionTransformer(HoodieMergeHelper.java:183)
           at org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:96)
           at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpdateInternal(BaseSparkCommitActionExecutor.java:372)
           at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpdate(BaseSparkCommitActionExecutor.java:363)
           at org.apache.hudi.table.action.deltacommit.BaseSparkDeltaCommitActionExecutor.handleUpdate(BaseSparkDeltaCommitActionExecutor.java:79)
           at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpsertPartition(BaseSparkCommitActionExecutor.java:329)
           at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.lambda$mapPartitionsAsRDD$a3ab3c4$1(BaseSparkCommitActionExecutor.java:251)
           at org.apache.spark.api.java.JavaRDDLike.$anonfun$mapPartitionsWithIndex$1(JavaRDDLike.scala:102)
           at org.apache.spark.api.java.JavaRDDLike.$anonfun$mapPartitionsWithIndex$1$adapted(JavaRDDLike.scala:102)
           at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndex$2(RDD.scala:915)
           at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsWithIndex$2$adapted(RDD.scala:915)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
           at org.apache.spark.rdd.RDD.$anonfun$getOrCompute$1(RDD.scala:386)
           at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1498)
           at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1408)
           at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1472)
           at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1295)
           at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:384)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:335)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
           at org.apache.spark.scheduler.Task.run(Task.scala:133)
           at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:506)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1474)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:509)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:750)
   Caused by: org.apache.avro.SchemaParseException: Cannot parse <null> schema
           at org.apache.avro.Schema.parse(Schema.java:1633)
           at org.apache.avro.Schema$Parser.parse(Schema.java:1430)
           at org.apache.avro.Schema$Parser.parse(Schema.java:1418)
           at org.apache.hudi.common.util.InternalSchemaCache.getInternalSchemaByVersionId(InternalSchemaCache.java:225)
           at org.apache.hudi.common.util.InternalSchemaCache.getInternalSchemaByVersionId(InternalSchemaCache.java:231)
           at org.apache.hudi.common.util.InternalSchemaCache.getInternalSchemaByVersionId(InternalSchemaCache.java:231)
           at org.apache.hudi.table.action.commit.HoodieMergeHelper.composeSchemaEvolutionTransformer(HoodieMergeHelper.java:183)
           at org.apache.hudi.table.action.commit.HoodieMergeHelper.runMerge(HoodieMergeHelper.java:96)
           at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpdateInternal(BaseSparkCommitActionExecutor.java:372)
           at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpdate(BaseSparkCommitActionExecutor.java:363)
           at org.apache.hudi.table.action.deltacommit.BaseSparkDeltaCommitActionExecutor.handleUpdate(BaseSparkDeltaCommitActionExecutor.java:79)
           at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpsertPartition(BaseSparkCommitActionExecutor.java:329)
   ```
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
