[ 
https://issues.apache.org/jira/browse/HADOOP-17195?focusedWorklogId=653695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-653695
 ]

ASF GitHub Bot logged work on HADOOP-17195:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Sep/21 17:03
            Start Date: 21/Sep/21 17:03
    Worklog Time Spent: 10m 
      Work Description: mehakmeet opened a new pull request #3467:
URL: https://github.com/apache/hadoop/pull/3467


   Addresses the problem of processes running out of memory when
   many ABFS output streams are queuing data to upload,
   especially when the network upload bandwidth is less than the rate
   at which data is generated.
   
   ABFS Output streams now buffer their blocks of data to
   "disk", "bytebuffer" or "array", as set in
   "fs.azure.data.blocks.buffer"
   
   When buffering via disk, the location for temporary storage
   is set in "fs.azure.buffer.dir"
   
   For safe scaling: use "disk" (default); for performance, when
   confident that upload bandwidth will never be a bottleneck,
   experiment with the memory options.
   
   The number of blocks a single stream can have queued for uploading
   is set in "fs.azure.block.upload.active.blocks".
   The default value is 20.
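   
   As an illustration, these options could be set in core-site.xml
   (property names and the default block count are taken from the
   description above; the buffer-dir value is a hypothetical example):
   
   ```xml
   <!-- Illustrative core-site.xml fragment, not the shipped defaults file. -->
   <property>
     <name>fs.azure.data.blocks.buffer</name>
     <value>disk</value> <!-- safe default; or "bytebuffer" / "array" -->
   </property>
   <property>
     <name>fs.azure.buffer.dir</name>
     <value>/tmp/abfs-buffers</value> <!-- hypothetical local path -->
   </property>
   <property>
     <name>fs.azure.block.upload.active.blocks</name>
     <value>20</value> <!-- default per the description above -->
   </property>
   ```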
   
   Contributed by Mehakmeet Singh.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 653695)
    Time Spent: 7h  (was: 6h 50m)

> Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs 
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-17195
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17195
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Mehakmeet Singh
>            Assignee: Mehakmeet Singh
>            Priority: Major
>              Labels: abfsactive, pull-request-available
>          Time Spent: 7h
>  Remaining Estimate: 0h
>
> OutOfMemory errors occur because a new thread pool is created each time an 
> AbfsOutputStream is constructed. Since the thread pools are not limited, a 
> large amount of data is loaded into buffers, which causes the OutOfMemory 
> error.
> Possible fixes:
> - Limit the thread count while performing hdfs copyFromLocal (using the 
> -t option).
> - Reduce OUTPUT_BUFFER_SIZE significantly, which would limit the amount of 
> data buffered in threads.
> - Don't create a new thread pool each time an AbfsOutputStream is created, 
> and limit the number of thread pools each AbfsOutputStream can create.
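> The last fix direction could be sketched as follows (a hypothetical
> illustration of sharing one bounded pool and capping queued blocks per
> stream; class and method names are invented, not the actual Hadoop patch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: one JVM-wide upload pool shared by all streams,
// plus a per-stream semaphore limiting how many blocks may be queued,
// mirroring the "fs.azure.block.upload.active.blocks" idea.
public class BoundedUploader {
    // Shared pool instead of one pool per output stream.
    private static final ExecutorService SHARED_POOL =
        new ThreadPoolExecutor(4, 4, 60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>());

    private final Semaphore activeBlocks;

    public BoundedUploader(int maxActiveBlocks) {
        this.activeBlocks = new Semaphore(maxActiveBlocks);
    }

    // Blocks the caller once maxActiveBlocks uploads are queued, applying
    // back-pressure instead of buffering data in memory without bound.
    public Future<?> submitBlock(Runnable upload) throws InterruptedException {
        activeBlocks.acquire();
        return SHARED_POOL.submit(() -> {
            try {
                upload.run();
            } finally {
                activeBlocks.release();
            }
        });
    }
}
```

> With this shape, a writer generating data faster than the network can
> drain it stalls in submitBlock rather than exhausting the heap.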



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
