[ 
https://issues.apache.org/jira/browse/HADOOP-17559?focusedWorklogId=560088&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-560088
 ]

ASF GitHub Bot logged work on HADOOP-17559:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 02/Mar/21 18:51
            Start Date: 02/Mar/21 18:51
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on pull request #2734:
URL: https://github.com/apache/hadoop/pull/2734#issuecomment-789132670


   Stack from the test failure fixed here, *as seen in the audit PR*. It shows
   the failure is not directly related to this change, though I had to step
   through to make sure:
   ```
   java.lang.AssertionError: Number of records written after commit #2; first commit had 4;
     first commit ancestors CommitContext{operationState=AncestorState{operation=Commitid=55;
       dest=s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles/out; size=6;
       paths={s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME
              s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles/out/file1
              s3a://stevel-london/fork-0001
              s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles/out
              s3a://stevel-london/fork-0001/test
              s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles}}};
     second commit ancestors: CommitContext{operationState=AncestorState{operation=Commitid=55;
       dest=s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles/out; size=8;
       paths={s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME
              s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles/out/file1
              s3a://stevel-london/fork-0001
              s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles/out
              s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles/out/subdir
              s3a://stevel-london/fork-0001/test
              s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles
              s3a://stevel-london/fork-0001/test/DELAY_LISTING_ME/testBulkCommitFiles/out/subdir/file2}}}:
     s3guard_metadatastore_record_writes expected:<2> but was:<3>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:834)
        at org.junit.Assert.assertEquals(Assert.java:645)
        at org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:1001)
        at org.apache.hadoop.fs.s3a.commit.ITestCommitOperations.testBulkCommitFiles(ITestCommitOperations.java:722)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
   ```
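
   For reference, the failing check is the MetricDiff pattern visible in the
   stack above. A minimal sketch of that pattern, assuming the usual constructor
   and `assertDiffEquals(String, long)` signatures; the `operation` hook and the
   class name below are hypothetical, not the actual test body:
   ```
   import org.apache.hadoop.fs.s3a.S3AFileSystem;
   import org.apache.hadoop.fs.s3a.S3ATestUtils;
   import org.apache.hadoop.fs.s3a.Statistic;

   public class RecordWriteCountSketch {
     /**
      * Run an operation and assert that it performed exactly {@code expected}
      * S3Guard metadata record writes, mirroring the failing check in
      * ITestCommitOperations.testBulkCommitFiles.
      */
     static void assertRecordWrites(S3AFileSystem fs, Runnable operation, long expected) {
       // Snapshot the s3guard_metadatastore_record_writes counter before the operation.
       S3ATestUtils.MetricDiff recordWrites =
           new S3ATestUtils.MetricDiff(fs, Statistic.S3GUARD_METADATASTORE_RECORD_WRITES);
       operation.run();
       // Fails with e.g. "expected:<2> but was:<3>" if more records were written.
       recordWrites.assertDiffEquals("Number of records written after commit #2", expected);
     }
   }
   ```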


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 560088)
    Time Spent: 0.5h  (was: 20m)

> S3Guard import can OOM on large imports
> ---------------------------------------
>
>                 Key: HADOOP-17559
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17559
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I know I'm closing ~all S3Guard issues as wontfix, but this one is pressing, so
> I'm going to fix it anyway.
> An S3Guard import of a directory tree containing many, many files will OOM.
> Looking at the code, this is because:
> * the import tool builds a map of all dirs imported, which, as the comments
> note, is "superfluous for DDB" - *cut*
> * the DDB AncestorState tracks files as well as dirs, purely as a safety check
> to make sure the current op doesn't somehow write a file entry above a dir
> entry in the same operation
> We've been running S3Guard for a long time, and condition #2 has never arisen.
> Propose: don't store filenames there, so memory consumption goes from
> O(files + dirs) to O(dirs); a sketch of the idea follows below.
> Code is straightforward; can't think of any tests.
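
A minimal illustrative sketch of that proposal: record only directory ancestors
during an operation, so memory stays O(dirs) rather than O(files + dirs). The
class and method names below are hypothetical, not the actual AncestorState API.
```
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.fs.Path;

/** Hypothetical dirs-only ancestor tracker; not the real DynamoDB AncestorState. */
class DirOnlyAncestorState {
  private final Set<Path> directories = new HashSet<>();

  /** Record a path written in this operation; file entries are deliberately dropped. */
  void put(Path path, boolean isDirectory) {
    if (isDirectory) {
      directories.add(path);
    }
    // Files are not retained: the file-above-dir safety check they supported
    // has never fired in practice, per the description above.
  }

  /** @return true if this directory was already recorded in this operation. */
  boolean containsDir(Path path) {
    return directories.contains(path);
  }
}
```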


