[ https://issues.apache.org/jira/browse/GOBBLIN-1957?focusedWorklogId=890994&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-890994 ]

ASF GitHub Bot logged work on GOBBLIN-1957:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 16/Nov/23 17:24
            Start Date: 16/Nov/23 17:24
    Worklog Time Spent: 10m 
      Work Description: Will-Lo commented on code in PR #3828:
URL: https://github.com/apache/gobblin/pull/3828#discussion_r1396086996


##########
gobblin-modules/gobblin-orc/src/main/java/org/apache/gobblin/writer/GobblinBaseOrcWriter.java:
##########
@@ -133,6 +136,7 @@ public GobblinBaseOrcWriter(FsDataWriterBuilder<S, D> builder, State properties)
         GobblinOrcWriterConfigs.DEFAULT_MIN_ORC_WRITER_ROWCHECK);
     this.orcFileWriterMaxRowsBetweenCheck = properties.getPropAsInt(GobblinOrcWriterConfigs.ORC_WRITER_MAX_ROWCHECK,
         GobblinOrcWriterConfigs.DEFAULT_MAX_ORC_WRITER_ROWCHECK);
+    this.enableLimitBufferSizeOrcStripe = properties.getPropAsBoolean(GobblinOrcWriterConfigs.ORC_WRITER_ENABLE_BUFFER_LIMIT_ORC_STRIPE, false);

Review Comment:
   Yeah, I didn't want to introduce an unwanted regression that would be hard to roll back; we can make this the static/default behavior if performance is good.
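
   For context, a minimal sketch (not the PR's actual implementation; the helper names, the stripe size, and the record-size estimate are assumptions) of how a flag like enableLimitBufferSizeOrcStripe could gate the cap so that one batch of large records stays within a single ORC stripe:

   // Hypothetical sketch, not the PR's implementation: cap the writer batch
   // size so one batch of large records fits in a single ORC stripe.
   // stripeSizeBytes and estimatedRecordSizeBytes are assumed inputs.
   public final class OrcBufferCapSketch {

     static int capBatchSize(boolean enableLimitBufferSizeOrcStripe,
                             int configuredMaxRows,
                             long stripeSizeBytes,
                             long estimatedRecordSizeBytes) {
       if (!enableLimitBufferSizeOrcStripe || estimatedRecordSizeBytes <= 0) {
         return configuredMaxRows;  // flag off: keep the existing behavior
       }
       // Rows that fit into one stripe at the current record-size estimate.
       long rowsPerStripe = stripeSizeBytes / estimatedRecordSizeBytes;
       // Stay under the configured maximum, but always allow at least one row.
       return (int) Math.max(1L, Math.min((long) configuredMaxRows, rowsPerStripe));
     }

     public static void main(String[] args) {
       // 64 MB stripe, ~500 KB records: the cap drops from 1000 rows to 131.
       System.out.println(capBatchSize(true, 1000, 64L << 20, 500L << 10));
       // Flag disabled: the configured maximum is returned unchanged.
       System.out.println(capBatchSize(false, 1000, 64L << 20, 500L << 10));
     }
   }

   Gating the cap behind a boolean that defaults to false, as the diff above does, leaves the existing code path untouched, which matches the rollback concern in the comment.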





Issue Time Tracking
-------------------

    Worklog Id:     (was: 890994)
    Time Spent: 50m  (was: 40m)

> Add feature to improve ORCWriter buffer sizes with large record sizes
> ---------------------------------------------------------------------
>
>                 Key: GOBBLIN-1957
>                 URL: https://issues.apache.org/jira/browse/GOBBLIN-1957
>             Project: Apache Gobblin
>          Issue Type: Improvement
>          Components: gobblin-core
>            Reporter: William Lo
>            Assignee: Abhishek Tiwari
>            Priority: Major
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> GobblinORCWriter's self-tuning uses a number of metrics to determine how
> large its buffers should be, both for its own internal conversion buffer and
> for the native ORC writer buffer. However, when record sizes are very large
> (hundreds of KB), the buffers' default maximum row count (e.g. 1000) can
> still hold a very large amount of data: observed footprints range from
> hundreds of megabytes up to a gigabyte, depending on the configured batch
> size maximums.
> We want a configuration that caps the buffer's maximum size so that the
> records held in the buffer do not exceed the size of an ORC stripe; then,
> once the batch is handed to the native ORC writer, the native ORC writer
> should flush its records and free the memory.
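
To make the memory math in the description concrete, a small illustrative example (the numbers are assumptions, not measurements from the issue):

// Illustrative arithmetic only: a 1000-row buffer of ~500 KB records holds
// roughly half a gigabyte before any flush, far larger than typical ORC
// stripe sizes (tens of megabytes).
public class BufferFootprintExample {
  public static void main(String[] args) {
    int defaultMaxRows = 1000;          // default buffer max, in rows
    long recordSizeBytes = 500L << 10;  // ~500 KB per record
    long footprint = defaultMaxRows * recordSizeBytes;
    System.out.printf("buffered: %.1f MB%n", footprint / (1024.0 * 1024.0));
    // prints: buffered: 488.3 MB
  }
}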



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
