[
https://issues.apache.org/jira/browse/HUDI-2969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17456433#comment-17456433
]
Javier commented on HUDI-2969:
------------------------------
[https://github.com/zubipower/StackOverFlowHudiError/blob/main/StackOverflowErrorTest.java]
The reproduction code is also available there.
I can provide the pom.xml and JVM settings if necessary.
This is running on JDK 8 with the following dependency versions:
<scala.binary.version>2.12</scala.binary.version>
<spark.version>3.0.0</spark.version>
<hudi.version>0.9.0</hudi.version>
<avro.version>1.8.2</avro.version>
<hadoop.version>2.7.3</hadoop.version>
<parquet.version>1.10.1</parquet.version>
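For quick reference, the failing write path is selected by the datasource option named in the issue title. A minimal sketch of the relevant Hudi write options, with values taken from this report (all other options omitted):

```properties
# Triggers the StackOverflowError when the schema has many fields (per this report)
hoodie.datasource.write.operation=bulk_insert

# Per the report, these operations do not trigger the error:
# hoodie.datasource.write.operation=insert
# hoodie.datasource.write.operation=upsert
```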
> stack overflow using bulk_insert - hoodie.datasource.write.operation
> --------------------------------------------------------------------
>
> Key: HUDI-2969
> URL: https://issues.apache.org/jira/browse/HUDI-2969
> Project: Apache Hudi
> Issue Type: Bug
> Reporter: Javier
> Priority: Major
> Attachments: StackOverflowErrorTest.java, fullstacktrace.txt
>
>
> Hi,
> when I try to use bulk_insert with a large number of fields in the schema, I get
> a StackOverflowError.
> This doesn't happen when I use insert or upsert.
> I have attached a sample main to reproduce the error, along with the stack trace.
> Please let me know if you need more information.
> Many thanks
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)