[ https://issues.apache.org/jira/browse/HADOOP-17195?focusedWorklogId=648065&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-648065 ]

ASF GitHub Bot logged work on HADOOP-17195:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/Sep/21 16:19
            Start Date: 08/Sep/21 16:19
    Worklog Time Spent: 10m 
      Work Description: mehakmeet opened a new pull request #3406:
URL: https://github.com/apache/hadoop/pull/3406


   Region: US-West-2
   `mvn -Dparallel-tests=abfs -DtestsThreadCount=8 -Dscale clean verify`
   
   ### UT:
   ```
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 105, Failures: 0, Errors: 0, Skipped: 2
   ```
   ### IT:
   ```
   [ERROR] Errors: 
   [ERROR]   ITestAzureBlobFileSystemLease.testFileSystemClose:301->lambda$testFileSystemClose$5:303 » TestTimedOut
   [INFO] 
   [ERROR] Tests run: 559, Failures: 0, Errors: 1, Skipped: 75
   ```
   This error is due to how we now close the ThreadPool in AbfsStore rather 
than in AbfsOutputStream as we used to. If we close the FileSystem before we 
close the stream, but after we have written the block, the ThreadPool is 
already in a "Terminated" state and we cannot submit the task. In this test, 
the submit is supposed to go through and receive a LeaseNotPresent error from 
the append task.
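
   For context, this is standard java.util.concurrent behavior rather than 
anything ABFS-specific: once an ExecutorService has been shut down, any 
further submission is rejected. A minimal, self-contained sketch (the class 
and task names are illustrative, not the actual ABFS code):
   ```java
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.RejectedExecutionException;

   public class TerminatedPoolDemo {
       public static void main(String[] args) {
           ExecutorService pool = Executors.newFixedThreadPool(2);
           pool.shutdown(); // pool moves towards Terminated; no new tasks accepted

           try {
               // Mirrors the failure above: submitting an append task after
               // the owning store has already closed the pool.
               pool.submit(() -> System.out.println("append block"));
           } catch (RejectedExecutionException e) {
               // The stream hits this instead of reaching the service and
               // getting LeaseNotPresent back from the append.
               System.out.println("Task rejected: pool already shut down");
           }
       }
   }
   ```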
   
   ### IT-scale:
   ```
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:66->testReadWriteAndSeek:101 [Retry was required due to issue on server side] expected:<[0]> but was:<[1]>
   [INFO] 
   [ERROR] Tests run: 259, Failures: 1, Errors: 0, Skipped: 40
   ```
   This test is failing due to an assert while validating the TracingHeaders: 
the request carries a retryNum of "0", but we are actually retrying once. On 
trunk, this fails for me with an "Out of Memory" exception, so I am not sure 
it actually passes on trunk either. I would appreciate it if someone could 
run this test in their setup.
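
   For reference, the bracketed message and the expected/actual formatting in 
the failure above come from a JUnit assertEquals on String values. A 
hypothetical stand-in for the check (not the actual TracingHeaderValidator 
code) that reproduces the exact failure text:
   ```java
   import static org.junit.Assert.assertEquals;

   public class RetryNumCheckSketch {
       public static void main(String[] args) {
           String expectedRetryNum = "0"; // the test expects no retries
           String headerRetryNum = "1";   // the request was actually retried once
           // Throws org.junit.ComparisonFailure:
           // [Retry was required due to issue on server side] expected:<[0]> but was:<[1]>
           assertEquals("Retry was required due to issue on server side",
                   expectedRetryNum, headerRetryNum);
       }
   }
   ```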




Issue Time Tracking
-------------------

    Worklog Id:     (was: 648065)
    Time Spent: 1h 10m  (was: 1h)

> Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs 
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-17195
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17195
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Mehakmeet Singh
>            Assignee: Bilahari T H
>            Priority: Major
>              Labels: abfsactive, pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> OutOfMemory error due to a new ThreadPool being created each time an 
> AbfsOutputStream is created. Since the thread pools are not limited, a lot 
> of data is loaded into buffers, which causes the OutOfMemory error.
> Possible fixes:
> - Limit the thread count while performing hdfs copyFromLocal (using the -t 
> option).
> - Reduce OUTPUT_BUFFER_SIZE significantly, which would limit the amount of 
> data buffered in the threads.
> - Don't create a new ThreadPool each time an AbfsOutputStream is created, 
> and limit the number of ThreadPools each AbfsOutputStream can create (a 
> sketch of this option follows the quoted description below).
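
A minimal sketch of that last option, assuming a single bounded pool shared by 
the output streams of a store; the names here are illustrative, not the actual 
ABFS classes:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical shared pool: one bounded executor per store rather than a
 * fresh unbounded ThreadPool per AbfsOutputStream.
 */
public class SharedWritePool {
    private final ThreadPoolExecutor pool;

    public SharedWritePool(int maxThreads, int maxQueuedBlocks) {
        // Bounding both the thread count and the queue caps the number of
        // in-flight write buffers, which is what prevents the OOM.
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>(maxQueuedBlocks);
        this.pool = new ThreadPoolExecutor(maxThreads, maxThreads,
                60L, TimeUnit.SECONDS, queue,
                // When saturated, run the upload on the caller's thread:
                // back-pressure instead of unbounded buffering.
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public void submitBlockUpload(Runnable upload) {
        pool.execute(upload);
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```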


