[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488398#comment-16488398 ]

Xiao Chen commented on HDFS-13540:
----------------------------------

Thanks a lot, Sammi!

> DFSStripedInputStream should only allocate new buffers when reading
> -------------------------------------------------------------------
>
>                 Key: HDFS-13540
>                 URL: https://issues.apache.org/jira/browse/HDFS-13540
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>            Priority: Major
>             Fix For: 3.2.0, 3.1.1, 3.0.3
>
>         Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch, HDFS-13540.03.patch, HDFS-13540.04.patch, HDFS-13540.05.patch, HDFS-13540.06.patch
>
>
> This was found in the same scenario where HDFS-13539 is caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memory
> java.lang.OutOfMemoryError: Direct buffer memory
>         at java.nio.Bits.reserveMemory(Bits.java:694)
>         at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>         at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>         at org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
>         at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
>         at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memory
> java.lang.OutOfMemoryError: Direct buffer memory
>         at java.nio.Bits.reserveMemory(Bits.java:694)
>         at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>         at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>         at org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
>         at org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
>         at org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
>         at org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
>         at org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack traces show, {{resetCurStripeBuffer}} will get a buffer from the buffer pool. We could save the cost of doing so if it's not for a read (e.g. close, unbuffer etc.)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
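The "only allocate when reading" idea from the description can be sketched as a guard on the buffer-fetching path: only a genuine read fetches a stripe buffer from the pool, while close/unbuffer merely release whatever is already held. This is an illustrative simplification, not the actual DFSStripedInputStream or ElasticByteBufferPool code; all names below (StripedStreamSketch, allocations) are invented for the sketch.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hedged sketch of the fix direction: resetCurStripeBuffer only allocates
// when the caller is actually reading. The real Hadoop classes differ.
class StripedStreamSketch {
    // Simplified stand-in for the shared, static ElasticByteBufferPool.
    static final Deque<ByteBuffer> POOL = new ArrayDeque<>();
    static int allocations = 0;  // counts direct allocations, for demonstration

    private ByteBuffer curStripeBuf;  // stays null until a read needs it

    private void resetCurStripeBuffer(boolean shouldAllocateBuf) {
        // Before the fix, this method unconditionally fetched from the pool,
        // so even close()/unbuffer() could trigger a fresh direct allocation.
        if (shouldAllocateBuf && curStripeBuf == null) {
            curStripeBuf = POOL.isEmpty() ? allocate() : POOL.poll();
        }
        if (curStripeBuf != null) {
            curStripeBuf.clear();
        }
    }

    private static ByteBuffer allocate() {
        allocations++;
        return ByteBuffer.allocateDirect(64 * 1024);
    }

    int read() {
        resetCurStripeBuffer(true);   // read path: buffer genuinely needed
        return 0;                     // stripe decoding elided
    }

    void close() {
        resetCurStripeBuffer(false);  // release path: never allocate
        if (curStripeBuf != null) {
            POOL.offer(curStripeBuf); // return the buffer to the pool
            curStripeBuf = null;
        }
    }
}
```

Under this sketch, opening a stream and immediately closing or unbuffering it performs zero direct allocations, which is exactly the behavior the OOM stack traces above show was missing.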
[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487136#comment-16487136 ]

SammiChen commented on HDFS-13540:
----------------------------------

+1. Thanks [~xiaochen] for the contribution. Committed to trunk, branch-3.0 and branch-3.1.
[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487106#comment-16487106 ]

Hudson commented on HDFS-13540:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14263 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/14263/])
HDFS-13540. DFSStripedInputStream should only allocate new buffers when (sammi.chen: rev 34e8b9f9a86fb03156861482643fba11bdee1dd4)
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/ElasticByteBufferPool.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java
[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16484608#comment-16484608 ]

genericqa commented on HDFS-13540:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 43s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 0m 36s | Maven dependency ordering for branch |
| +1 | mvninstall | 28m 43s | trunk passed |
| +1 | compile | 30m 15s | trunk passed |
| +1 | checkstyle | 3m 13s | trunk passed |
| +1 | mvnsite | 3m 25s | trunk passed |
| +1 | shadedclient | 17m 13s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 36s | trunk passed |
| +1 | javadoc | 2m 26s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 40s | the patch passed |
| +1 | compile | 30m 27s | the patch passed |
| +1 | javac | 30m 27s | the patch passed |
| +1 | checkstyle | 3m 23s | the patch passed |
| +1 | mvnsite | 3m 30s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 12s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 7m 16s | the patch passed |
| +1 | javadoc | 2m 34s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 27s | hadoop-common in the patch passed. |
| +1 | unit | 1m 52s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 102m 41s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | 264m 41s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
| | hadoop.hdfs.TestDFSClientRetries |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13540 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924584/HDFS-13540.06.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a7d0630657db 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64
[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16484281#comment-16484281 ]

Xiao Chen commented on HDFS-13540:
----------------------------------

Thanks Sammi.

#1 should be fixed in patch 5 - it's due to the static nature of the pool. It seems Jenkins had a glitch though:
{noformat}
[ERROR] error An unexpected error occurred: "https://registry.yarnpkg.com/realize-package-specifier/-/realize-package-specifier-3.0.3.tgz: Request failed \"504 Gateway Timeout\"".
{noformat}
#2 is addressed in patch 6, which should trigger a new pre-commit run.
[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483617#comment-16483617 ]

genericqa commented on HDFS-13540:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 42s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 40s | Maven dependency ordering for branch |
| +1 | mvninstall | 26m 16s | trunk passed |
| -1 | compile | 24m 32s | root in trunk failed. |
| +1 | checkstyle | 3m 18s | trunk passed |
| +1 | mvnsite | 3m 9s | trunk passed |
| +1 | shadedclient | 17m 8s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 7s | trunk passed |
| +1 | javadoc | 2m 7s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 18s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 38s | the patch passed |
| +1 | compile | 31m 25s | the patch passed |
| -1 | javac | 31m 25s | root generated 188 new + 1277 unchanged - 0 fixed = 1465 total (was 1277) |
| +1 | checkstyle | 3m 11s | the patch passed |
| +1 | mvnsite | 3m 10s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 39s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 56s | the patch passed |
| +1 | javadoc | 2m 38s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 10m 22s | hadoop-common in the patch passed. |
| +1 | unit | 1m 59s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 123m 40s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 45s | The patch does not generate ASF License warnings. |
| | | 276m 57s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13540 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924469/HDFS-13540.05.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 0d9b970f4543 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality |
[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483581#comment-16483581 ]

SammiChen commented on HDFS-13540:
----------------------------------

Hi Xiao, the overall idea looks good to me.
1. There are two relevant unit test failures. The error message is "expected:<0> but was:<2>". Maybe we can dig into why 2 buffers are allocated for an open stream which hasn't read any content yet.
2. @VisibleForTesting ahead of resetCurStripeBuffer is not necessary now.
[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16483434#comment-16483434 ]

Xiao Chen commented on HDFS-13540:
----------------------------------

Failed tests are related, because the pool is a static var of the class. Mocking is pretty difficult as {{ElasticByteBufferPool}} is a final class, so I went with the approach in [^HDFS-13540.05.patch] to test this while changing as little of the {{ElasticByteBufferPool}} class as possible.
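Because the pool is static and {{ElasticByteBufferPool}} is final, the comments above describe testing by observing the pool's state rather than mocking it. A minimal sketch of that idea follows; {{PoolSketch}} and its {{size()}} accessor are hypothetical illustrations of the pattern, not the method the actual patch added to Hadoop.

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hedged sketch: a tiny buffer pool with a visibility hook, showing how a
// unit test can assert on pool contents instead of mocking a final class.
class PoolSketch {
    private final Deque<ByteBuffer> buffers = new ArrayDeque<>();

    ByteBuffer getBuffer(int length) {
        // Reuse a pooled buffer when one is large enough; otherwise
        // allocate a fresh direct buffer (the expensive path in the JIRA).
        ByteBuffer b = buffers.poll();
        return (b != null && b.capacity() >= length)
            ? b : ByteBuffer.allocateDirect(length);
    }

    void putBuffer(ByteBuffer buffer) {
        buffer.clear();
        buffers.offer(buffer);
    }

    // An accessor like this is what lets a test assert that close()/unbuffer()
    // returned every buffer to the pool instead of allocating new ones.
    int size() {
        return buffers.size();
    }
}
```

With such a hook, a test can take a buffer, hand it back, and assert the pool count went 0 -> 1 -> 0 as the buffer is reused, which is the observable behavior the "expected:<0> but was:<2>" failures were checking.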
[jira] [Commented] (HDFS-13540) DFSStripedInputStream should only allocate new buffers when reading
[ https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16482325#comment-16482325 ]

genericqa commented on HDFS-13540:
----------------------------------

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 39s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 1m 2s | Maven dependency ordering for branch |
| +1 | mvninstall | 30m 33s | trunk passed |
| +1 | compile | 40m 53s | trunk passed |
| +1 | checkstyle | 3m 53s | trunk passed |
| +1 | mvnsite | 4m 28s | trunk passed |
| +1 | shadedclient | 21m 1s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 7m 22s | trunk passed |
| +1 | javadoc | 3m 25s | trunk passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 27s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 32s | the patch passed |
| +1 | compile | 28m 59s | the patch passed |
| +1 | javac | 28m 59s | the patch passed |
| +1 | checkstyle | 3m 11s | the patch passed |
| +1 | mvnsite | 3m 12s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 30s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 5m 50s | the patch passed |
| +1 | javadoc | 2m 31s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 16s | hadoop-common in the patch passed. |
| +1 | unit | 1m 41s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 107m 11s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 49s | The patch does not generate ASF License warnings. |
| | | 285m 30s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStream |
| | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.client.impl.TestBlockReaderLocal |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13540 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12924303/HDFS-13540.04.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit