[ https://issues.apache.org/jira/browse/GOBBLIN-1957?focusedWorklogId=891033&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-891033 ]

ASF GitHub Bot logged work on GOBBLIN-1957:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 17/Nov/23 01:06
            Start Date: 17/Nov/23 01:06
    Worklog Time Spent: 10m 
      Work Description: homatthew commented on code in PR #3828:
URL: https://github.com/apache/gobblin/pull/3828#discussion_r1396555141


##########
gobblin-modules/gobblin-orc/src/main/java/org/apache/gobblin/writer/GobblinBaseOrcWriter.java:
##########
@@ -109,6 +111,7 @@ public GobblinBaseOrcWriter(FsDataWriterBuilder<S, D> builder, State properties)
     this.typeDescription = getOrcSchema();
     this.selfTuningWriter = properties.getPropAsBoolean(GobblinOrcWriterConfigs.ORC_WRITER_AUTO_SELFTUNE_ENABLED, false);
     this.validateORCAfterClose = properties.getPropAsBoolean(GobblinOrcWriterConfigs.ORC_WRITER_VALIDATE_FILE_AFTER_CLOSE, false);
+    this.rowCheckFactor = properties.getPropAsInt(GobblinOrcWriterConfigs.ORC_WRITER_BATCHSIZE_ROWCHECK_FACTOR, GobblinOrcWriterConfigs.DEFAULT_ORC_WRITER_BATCHSIZE_ROWCHECK_FACTOR);

Review Comment:
   I find this variable name a little ambiguous. The longer form `batchSizeRowCheckFactor` makes a little more sense to me.





Issue Time Tracking
-------------------

    Worklog Id:     (was: 891033)
    Time Spent: 1h  (was: 50m)

> Add feature to improve ORCWriter buffer sizes with large record sizes
> ---------------------------------------------------------------------
>
>                 Key: GOBBLIN-1957
>                 URL: https://issues.apache.org/jira/browse/GOBBLIN-1957
>             Project: Apache Gobblin
>          Issue Type: Improvement
>          Components: gobblin-core
>            Reporter: William Lo
>            Assignee: Abhishek Tiwari
>            Priority: Major
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> GobblinORCWriter's self-tuning uses a number of metrics to determine how large
> its buffers should be, both for its own internal buffer used for conversion
> and for the native ORC writer buffer. However, when record sizes are very
> large (hundreds of KB), the buffers' default maximum size (e.g. 1000 rows)
> can still hold a very large amount of data: observed memory usage ranged from
> hundreds of megabytes to a gigabyte, depending on the configured batch size
> maximums.
> We want a configuration that imposes a cap on the buffer's maximum size so
> that large records in the buffer do not exceed the size of a stripe; then,
> when the buffer is handed to the native ORC writer, the native writer should
> flush its records and free the memory.
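The capping idea described above can be sketched as follows. This is a minimal illustration only; the class, method, and parameter names here are hypothetical and are not Gobblin's actual API. It shows how a batch-size cap derived from average record size keeps a batch's estimated byte footprint within one ORC stripe:

```java
// Hypothetical sketch of the buffer-capping idea from GOBBLIN-1957.
// Names are illustrative, not Gobblin's actual API.
public class BatchSizeCap {

    /**
     * Cap the row-batch size so that batchSize * avgRecordSize stays within
     * a single ORC stripe, encouraging the native writer to flush per stripe.
     *
     * @param configuredMaxBatchSize configured row cap (e.g. the 1000-row default)
     * @param avgRecordSizeBytes     running average record size observed by the tuner
     * @param stripeSizeBytes        target ORC stripe size (e.g. 64 MB)
     * @return a batch size whose estimated byte footprint fits in one stripe
     */
    public static int capBatchSize(int configuredMaxBatchSize,
                                   long avgRecordSizeBytes,
                                   long stripeSizeBytes) {
        if (avgRecordSizeBytes <= 0) {
            // No size estimate yet; fall back to the configured maximum.
            return configuredMaxBatchSize;
        }
        long rowsPerStripe = stripeSizeBytes / avgRecordSizeBytes;
        // Never go below 1 row, never above the configured maximum.
        return (int) Math.max(1, Math.min(configuredMaxBatchSize, rowsPerStripe));
    }

    public static void main(String[] args) {
        // 200 KB records against a 64 MB stripe: the cap lands well below
        // the 1000-row default, so the buffer cannot exceed one stripe.
        System.out.println(capBatchSize(1000, 200L * 1024, 64L * 1024 * 1024));
    }
}
```

With small records the configured maximum is unchanged; only when records are large enough that 1000 of them would overflow a stripe does the cap kick in.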



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
