mehakmeet opened a new pull request #3406:
URL: https://github.com/apache/hadoop/pull/3406


   Region: US-West-2
   `mvn -Dparallel-tests=abfs -DtestsThreadCount=8 -Dscale clean verify`
   
   ### UT:
   ```
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 105, Failures: 0, Errors: 0, Skipped: 2
   ```
   ### IT:
   ```
   [ERROR] Errors: 
   [ERROR]   ITestAzureBlobFileSystemLease.testFileSystemClose:301->lambda$testFileSystemClose$5:303 » TestTimedOut
   [INFO] 
   [ERROR] Tests run: 559, Failures: 0, Errors: 1, Skipped: 75
   ```
   This error is due to the ThreadPool now being closed in AbfsStore rather than in AbfsOutputStream, as it used to be. If we close the FileSystem before closing the stream, but after the block has been written, the ThreadPool is already in a "Terminated" state and the append task can no longer be submitted. In this test, the submit is expected to go through and to receive LeaseNotPresent from the append task.
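
   For context, a minimal sketch of why the submit fails once the pool has been shut down (plain `java.util.concurrent`, not the actual AbfsStore/AbfsOutputStream code):
   ```
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.RejectedExecutionException;

   public class TerminatedPoolDemo {
       public static void main(String[] args) {
           ExecutorService pool = Executors.newFixedThreadPool(2);
           // Stand-in for the FileSystem close: the shared pool is shut
           // down before the stream has submitted its append task.
           pool.shutdownNow();
           try {
               pool.submit(() -> System.out.println("append task"));
           } catch (RejectedExecutionException e) {
               // Once terminated, the pool rejects new tasks, so the
               // test never reaches the LeaseNotPresent path.
               System.out.println("submit rejected: pool already terminated");
           }
       }
   }
   ```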
   
   ### IT-scale:
   ```
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:66->testReadWriteAndSeek:101 [Retry was required due to issue on server side] expected:<[0]> but was:<[1]>
   [INFO] 
   [ERROR] Tests run: 259, Failures: 1, Errors: 0, Skipped: 40
   ```
   This test is failing due to an assert while validating the TracingHeaders: the request carries retryNum "0", but we are retrying once. On trunk, this test fails for me with an "Out of Memory" exception, so I am not sure whether it actually passes on trunk either. Would appreciate it if someone could run this test in their setup.
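
   For reference, a hedged sketch of the shape of that check (names here are illustrative, not the actual tracing-header validation code): the expected retryNum is pinned to "0", so a single server-side retry trips the assert with exactly the message above:
   ```
   import static org.junit.Assert.assertEquals;

   public class RetryNumAssertSketch {
       // Illustrative stand-in: the retry number is read back out of
       // the request's tracing header as a string field.
       static void validateRetryNum(String retryNumFromHeader) {
           // Expecting no retries; one server-side retry yields
           // exactly: expected:<[0]> but was:<[1]>.
           assertEquals("Retry was required due to issue on server side",
                   "0", retryNumFromHeader);
       }

       public static void main(String[] args) {
           validateRetryNum("1"); // reproduces the IT-scale failure shape
       }
   }
   ```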

