ashok-workouts commented on issue #1977:
URL: https://github.com/apache/hudi/issues/1977#issuecomment-743721660


   Hi Friends,
   
   I have added the jars, and the job works fine for a small number of
   records when converting JSON files to Hudi format. In our case, however,
   the JSON files contain around 6,000 records, and at that volume the job
   fails with the error below. Could you please help me with this issue?
   Thanks in advance.
   
   
Traceback (most recent call last):
  File "/tmp/gxp-campaign-hudi.py", line 75, in write_json_to_hudi
    delivered_df.write.format('hudi').options(**commonConfig).mode('append').save(target2_path)
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 734, in save
    self._jwrite.save(path)
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/opt/amazon/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o160.save.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 115.0 failed 4 times, most recent failure: Lost task 0.3 in stage 115.0 (TID 55802, 172.34.40.188, executor 1): org.apache.hudi.exception.HoodieUpsertException: Error upserting bucketType UPDATE for partition :0
    at org.apache.hudi.table.action.commit.BaseCommitActionExecutor.handleUpsertPartition(BaseCommitActionExecutor.java:264)
    at org.apache.hudi.table.action.commit.BaseCommitActionExecutor.lambda$execute$caffe4c4$1(BaseCommitActionExecutor.java:97)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:337)
    at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:335)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1182)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieException: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
    at org.apache.hudi.table.action.commit.MergeHelper.runMerge(MergeHelper.java:100)
    at org.apache.hudi.table.action.commit.CommitActionExecutor.handleUpdateInternal(CommitActionExecutor.java:89)
    at org.apache.hudi.table.action.commit.CommitActionExecutor.handleUpdate(CommitActionExecutor.java:73)
    at org.apache.hudi.table.action.commit.BaseCommitActionExecutor.handleUpsertPartition(BaseCommitActionExecutor.java:257)
    ... 30 more
Caused by: org.apache.hudi.exception.HoodieException: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
    at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.execute(BoundedInMemoryExecutor.java:143)
    at org.apache.hudi.table.action.commit.MergeHelper.runMerge(MergeHelper.java:98)
    ... 33 more
Caused by: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.execute(BoundedInMemoryExecutor.java:141)
    ... 34 more
Caused by: org.apache.hudi.exception.HoodieException: operation has failed
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.throwExceptionIfFailed(BoundedInMemoryQueue.java:227)
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.readNextRecord(BoundedInMemoryQueue.java:206)
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.access$100(BoundedInMemoryQueue.java:52)
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueue$QueueIterator.hasNext(BoundedInMemoryQueue.java:257)
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueueConsumer.consume(BoundedInMemoryQueueConsumer.java:36)
    at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$2(BoundedInMemoryExecutor.java:121)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    ... 3 more
Caused by: org.apache.parquet.io.InvalidRecordException: Parquet/Avro schema mismatch: Avro field 'date' not found
    at org.apache.parquet.avro.AvroRecordConverter.getAvroField(AvroRecordConverter.java:225)
    at org.apache.parquet.avro.AvroRecordConverter.<init>(AvroRecordConverter.java:130)
    at org.apache.parquet.avro.AvroRecordConverter.<init>(AvroRecordConverter.java:95)
    at org.apache.parquet.avro.AvroRecordMaterializer.<init>(AvroRecordMaterializer.java:33)
    at org.apache.parquet.avro.AvroReadSupport.prepareForRead(AvroReadSupport.java:138)
    at org.apache.parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:183)
    at org.apache.parquet.hadoop.ParquetReader.initReader(ParquetReader.java:156)
    at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:135)
    at org.apache.hudi.common.util.ParquetReaderIterator.hasNext(ParquetReaderIterator.java:49)
    at org.apache.hudi.common.util.queue.IteratorBasedQueueProducer.produce(IteratorBasedQueueProducer.java:45)
    at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$0(BoundedInMemoryExecutor.java:92)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    ... 4 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1168)
    at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:389)
    at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:205)
    at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:125)
    at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hudi.exception.HoodieUpsertException: Error upserting bucketType UPDATE for partition :0
    at org.apache.hudi.table.action.commit.BaseCommitActionExecutor.handleUpsertPartition(BaseCommitActionExecutor.java:264)
    at org.apache.hudi.table.action.commit.BaseCommitActionExecutor.lambda$execute$caffe4c4$1(BaseCommitActionExecutor.java:97)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:337)
    at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:335)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1182)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:335)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:286)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
Caused by: org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieException: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
    at org.apache.hudi.table.action.commit.MergeHelper.runMerge(MergeHelper.java:100)
    at org.apache.hudi.table.action.commit.CommitActionExecutor.handleUpdateInternal(CommitActionExecutor.java:89)
    at org.apache.hudi.table.action.commit.CommitActionExecutor.handleUpdate(CommitActionExecutor.java:73)
    at org.apache.hudi.table.action.commit.BaseCommitActionExecutor.handleUpsertPartition(BaseCommitActionExecutor.java:257)
    ... 30 more
Caused by: org.apache.hudi.exception.HoodieException: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
    at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.execute(BoundedInMemoryExecutor.java:143)
    at org.apache.hudi.table.action.commit.MergeHelper.runMerge(MergeHelper.java:98)
    ... 33 more
Caused by: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.execute(BoundedInMemoryExecutor.java:141)
    ... 34 more
Caused by: org.apache.hudi.exception.HoodieException: operation has failed
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.throwExceptionIfFailed(BoundedInMemoryQueue.java:227)
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.readNextRecord(BoundedInMemoryQueue.java:206)
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.access$100(BoundedInMemoryQueue.java:52)
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueue$QueueIterator.hasNext(BoundedInMemoryQueue.java:257)
    at org.apache.hudi.common.util.queue.BoundedInMemoryQueueConsumer.consume(BoundedInMemoryQueueConsumer.java:36)
    at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$2(BoundedInMemoryExecutor.java:121)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    ... 3 more
Caused by: org.apache.parquet.io.InvalidRecordException: Parquet/Avro schema mismatch: Avro field 'date' not found
    at org.apache.parquet.avro.AvroRecordConverter.getAvroField(AvroRecordConverter.java:225)
    at org.apache.parquet.avro.AvroRecordConverter.<init>(AvroRecordConverter.java:130)
    at org.apache.parquet.avro.AvroRecordConverter.<init>(AvroRecordConverter.java:95)
    at org.apache.parquet.avro.AvroRecordMaterializer.<init>(AvroRecordMaterializer.java:33)
    at org.apache.parquet.avro.AvroReadSupport.prepareForRead(AvroReadSupport.java:138)
    at org.apache.parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:183)
    at org.apache.parquet.hadoop.ParquetReader.initReader(ParquetReader.java:156)
    at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:135)
    at org.apache.hudi.common.util.ParquetReaderIterator.hasNext(ParquetReaderIterator.java:49)
    at org.apache.hudi.common.util.queue.IteratorBasedQueueProducer.produce(IteratorBasedQueueProducer.java:45)
    at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$0(BoundedInMemoryExecutor.java:92)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    ... 4 more

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/gxp-campaign-hudi.py", line 44, in read_json_rename_columns
    write_json_to_hudi(target_bucket, df)
  File "/tmp/gxp-campaign-hudi.py", line 78, in write_json_to_hudi
    logging.exception("GXP: The function 'write_json_to_hudi()' failed to write as hudi format. Error Detected:{1}".format(str(e)))
IndexError: tuple index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/gxp-campaign-hudi.py", line 99, in get_json_prefix
    read_json_rename_columns(source_bucket, prefix_set)
  File "/tmp/gxp-campaign-hudi.py", line 46, in read_json_rename_columns
    logging.exception("GXP: The function 'read_json_rename_columns()' failed to read json file. Error Detected:{1}".format(str(e)))
IndexError: tuple index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/gxp-campaign-hudi.py", line 116, in <module>
    get_json_prefix(s3_client, source_bucket, s3_prefix)
  File "/tmp/gxp-campaign-hudi.py", line 101, in get_json_prefix
    logging.exception("GXP: The function 'get_json_prefix()' failed to get json file prefix. Error Detected:{1}".format(str(e)))
IndexError: tuple index out of range
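
   A side note on the three `IndexError: tuple index out of range` tracebacks at the end: they come from the script's own logging calls, not from Hudi. `"Error Detected:{1}".format(str(e))` passes a single positional argument but the placeholder `{1}` asks for a second one, so `str.format` itself raises and masks the real exception. A minimal reproduction and fix in plain Python:

   ```python
   e = ValueError("operation has failed")

   # Buggy pattern from the script: "{1}" refers to the SECOND positional
   # argument, but .format() receives only one, so .format itself raises
   # IndexError before logging.exception ever runs.
   try:
       "Error Detected:{1}".format(str(e))
       raise AssertionError("unreachable: .format should have raised")
   except IndexError:
       print("the logging call itself raised IndexError")

   # Fix: index the only argument with {0} (or use an f-string), so the
   # real Hudi/Spark error is what gets logged.
   print("Error Detected: {0}".format(e))
   ```

   With that fixed, `logging.exception` would report the underlying `Py4JJavaError` instead of the secondary `IndexError`.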
   
   
   Best Regards,
   Ashok,
   +91 9600078919
   
   
   
   On Mon, Dec 7, 2020, 11:24 AM Karthick Natarajan <[email protected]>
   wrote:
   
   > @ashok-workouts <https://github.com/ashok-workouts> Verify if you're
   > using the same version of spark-avro dependency as your spark version and
   > also the scala build versions between spark-avro and hudi-spark-bundle.
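
   Beyond the dependency-version check suggested above, the innermost `Caused by` in the trace (`Parquet/Avro schema mismatch: Avro field 'date' not found`) usually means the incoming batch's schema is missing a column, here `date`, that exists in the table's Parquet files. The sketch below only illustrates the idea of padding each incoming record to the full table schema before the write; the column names are hypothetical, and in the real job this would be done on the Spark DataFrame (e.g. adding the missing columns as null literals) before the `delivered_df.write.format('hudi')` call:

   ```python
   def pad_to_table_schema(record, table_columns):
       """Return a copy of `record` containing every table column,
       filling columns the incoming batch lacks with None (null).

       Pure-Python sketch of the idea only; the actual fix belongs on
       the DataFrame so the batch's Avro schema matches the table's
       existing Parquet schema.
       """
       return {col: record.get(col) for col in table_columns}

   # Hypothetical columns: the existing Hudi table has a 'date' column,
   # but this batch's JSON records dropped it.
   table_columns = ["id", "name", "date"]
   incoming = {"id": 1, "name": "campaign"}

   print(pad_to_table_schema(incoming, table_columns))
   # {'id': 1, 'name': 'campaign', 'date': None}
   ```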
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]