[ 
https://issues.apache.org/jira/browse/HADOOP-17905?focusedWorklogId=649580&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-649580
 ]

ASF GitHub Bot logged work on HADOOP-17905:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 11/Sep/21 12:19
            Start Date: 11/Sep/21 12:19
    Worklog Time Spent: 10m 
      Work Description: pbacsko opened a new pull request #3423:
URL: https://github.com/apache/hadoop/pull/3423


   … backing array size
   
   Change-Id: I3cfae85000c5fa7aa86c40c2d7efa282958178bf
   
   
   ### Description of PR
   
   Allow org.apache.hadoop.io.Text to expand its underlying byte array up to a 
safe maximum size (Integer.MAX_VALUE - 8). 
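   
   As a rough illustration only (this is not code from the patch; the class name 
   `LargeTextSketch` is made up), growth of the backing array can be exercised 
   through the existing `Text.append()` API:
   
   ```java
   import org.apache.hadoop.io.Text;
   
   public class LargeTextSketch {
     public static void main(String[] args) {
       // Requires a heap of several GB (e.g. -Xmx4g).
       Text text = new Text();
       byte[] chunk = new byte[64 * 1024 * 1024];   // 64 MiB per append
       for (int i = 0; i < 28; i++) {               // ~1.8 GB in total
         text.append(chunk, 0, chunk.length);       // forces repeated expansion
       }
       System.out.println("final length: " + text.getLength());
     }
   }
   ```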
   
   ### How was this patch tested?
   
   1. Ran unit tests
   2. Ran a MapReduce job in which a single mapper processed a 1.8 GB text file 
containing no line feeds. Array expansion was verified with extra printouts.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

            Worklog Id:     (was: 649580)
    Remaining Estimate: 0h
            Time Spent: 10m

> Modify Text.ensureCapacity() to efficiently max out the backing array size
> --------------------------------------------------------------------------
>
>                 Key: HADOOP-17905
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17905
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Peter Bacsko
>            Assignee: Peter Bacsko
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a continuation of HADOOP-17901.
> Right now we grow the byte array by a factor of 1.5x when it becomes full. 
> However, once the size passes a certain point, the increment shrinks to just 
> (current size + length), which means the array is reallocated and copied on 
> almost every append. This can cause performance problems if the textual data 
> we intend to store is larger than that point.
> Instead, let's expand the array straight to a safe maximum. Based on several 
> sources, a safe choice seems to be Integer.MAX_VALUE - 8 (see ArrayList, 
> AbstractCollection, Hashtable, etc.).
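
For illustration, here is a minimal sketch of the growth policy described above 
(class and method names are made up; this is not the actual Text.ensureCapacity() 
code): grow by roughly 1.5x, and once that would pass the limit, jump straight to 
the Integer.MAX_VALUE - 8 cap instead of creeping up in small increments.

    // Illustrative sketch only -- not the real org.apache.hadoop.io.Text code.
    public final class CapacityGrowthSketch {

      // Same cap used by ArrayList, AbstractCollection and Hashtable in the JDK.
      private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

      /** New backing-array length for a buffer that must hold 'required' bytes. */
      static int newCapacity(int current, int required) {
        if (required < 0 || required > MAX_ARRAY_SIZE) {
          throw new OutOfMemoryError("Requested array size too large: " + required);
        }
        if (required <= current) {
          return current;                               // already big enough
        }
        long grown = (long) current + (current >> 1);   // ~1.5x, overflow-safe
        if (grown < required) {
          grown = required;
        }
        // Rather than creeping up by (current size + length) near the limit,
        // jump straight to the safe maximum.
        return grown > MAX_ARRAY_SIZE ? MAX_ARRAY_SIZE : (int) grown;
      }

      public static void main(String[] args) {
        System.out.println(newCapacity(1_000_000, 3_000_000));          // 3000000
        System.out.println(newCapacity(1_800_000_000, 1_900_000_000));  // 2147483639
      }
    }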



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
